00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 618
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3284
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.099 The recommended git tool is: git
00:00:00.099 using credential 00000000-0000-0000-0000-000000000002
00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.115 Fetching changes from the remote Git repository
00:00:00.118 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.138 Using shallow fetch with depth 1
00:00:00.139 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.139 > git --version # timeout=10
00:00:00.168 > git --version # 'git version 2.39.2'
00:00:00.168 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.196 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.196 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.932 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.942 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.957 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD)
00:00:04.957 > git config core.sparsecheckout # timeout=10
00:00:04.968 > git read-tree -mu HEAD # timeout=10
00:00:04.984 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5
00:00:05.007 Commit message: "spdk-abi-per-patch: pass revision to subbuild"
00:00:05.007 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10
00:00:05.086 [Pipeline] Start of Pipeline
00:00:05.098 [Pipeline] library
00:00:05.099 Loading library shm_lib@master
00:00:05.099 Library shm_lib@master is cached. Copying from home.
00:00:05.116 [Pipeline] node
00:00:05.130 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.132 [Pipeline] {
00:00:05.140 [Pipeline] catchError
00:00:05.141 [Pipeline] {
00:00:05.151 [Pipeline] wrap
00:00:05.159 [Pipeline] {
00:00:05.166 [Pipeline] stage
00:00:05.167 [Pipeline] { (Prologue)
00:00:05.351 [Pipeline] sh
00:00:05.634 + logger -p user.info -t JENKINS-CI
00:00:05.652 [Pipeline] echo
00:00:05.654 Node: GP11
00:00:05.661 [Pipeline] sh
00:00:05.952 [Pipeline] setCustomBuildProperty
00:00:05.962 [Pipeline] echo
00:00:05.963 Cleanup processes
00:00:05.968 [Pipeline] sh
00:00:06.243 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.243 329630 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.256 [Pipeline] sh
00:00:06.532 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.532 ++ grep -v 'sudo pgrep'
00:00:06.532 ++ awk '{print $1}'
00:00:06.532 + sudo kill -9
00:00:06.532 + true
00:00:06.545 [Pipeline] cleanWs
00:00:06.554 [WS-CLEANUP] Deleting project workspace...
00:00:06.554 [WS-CLEANUP] Deferred wipeout is used...
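[Editor's note: the "Cleanup processes" step just logged kills any stray SPDK processes left over from a previous run before wiping the workspace. A minimal standalone sketch of that idiom, assuming the same workspace path shown in the log:]

    #!/usr/bin/env bash
    # Sketch of the stray-process cleanup seen above.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # List processes whose command line mentions the workspace,
    # drop the pgrep invocation itself, and keep only the PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
    # 'kill -9' with an empty PID list fails, so '|| true' keeps a
    # 'set -e' script alive -- which is why the log shows '+ true'
    # right after the bare '+ sudo kill -9'.
    sudo kill -9 $pids || true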
00:00:06.560 [WS-CLEANUP] done 00:00:06.564 [Pipeline] setCustomBuildProperty 00:00:06.576 [Pipeline] sh 00:00:06.852 + sudo git config --global --replace-all safe.directory '*' 00:00:06.924 [Pipeline] httpRequest 00:00:06.948 [Pipeline] echo 00:00:06.949 Sorcerer 10.211.164.101 is alive 00:00:06.957 [Pipeline] httpRequest 00:00:06.964 HttpMethod: GET 00:00:06.971 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.973 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.976 Response Code: HTTP/1.1 200 OK 00:00:06.977 Success: Status code 200 is in the accepted range: 200,404 00:00:06.977 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:09.042 [Pipeline] sh 00:00:09.322 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:09.336 [Pipeline] httpRequest 00:00:09.360 [Pipeline] echo 00:00:09.361 Sorcerer 10.211.164.101 is alive 00:00:09.368 [Pipeline] httpRequest 00:00:09.371 HttpMethod: GET 00:00:09.372 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:09.372 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:09.394 Response Code: HTTP/1.1 200 OK 00:00:09.394 Success: Status code 200 is in the accepted range: 200,404 00:00:09.394 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:17.780 [Pipeline] sh 00:01:18.058 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:20.598 [Pipeline] sh 00:01:20.878 + git -C spdk log --oneline -n5 00:01:20.878 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:20.878 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:20.878 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:20.878 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:20.878 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:20.896 [Pipeline] withCredentials 00:01:20.906 > git --version # timeout=10 00:01:20.917 > git --version # 'git version 2.39.2' 00:01:20.931 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:20.933 [Pipeline] { 00:01:20.941 [Pipeline] retry 00:01:20.943 [Pipeline] { 00:01:20.959 [Pipeline] sh 00:01:21.241 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:25.479 [Pipeline] } 00:01:25.505 [Pipeline] // retry 00:01:25.511 [Pipeline] } 00:01:25.532 [Pipeline] // withCredentials 00:01:25.543 [Pipeline] httpRequest 00:01:25.561 [Pipeline] echo 00:01:25.563 Sorcerer 10.211.164.101 is alive 00:01:25.573 [Pipeline] httpRequest 00:01:25.578 HttpMethod: GET 00:01:25.579 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.579 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:25.580 Response Code: HTTP/1.1 200 OK 00:01:25.580 Success: Status code 200 is in the accepted range: 200,404 00:01:25.581 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:30.087 [Pipeline] sh 00:01:30.367 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:32.287 [Pipeline] sh 00:01:32.569 + git -C dpdk log --oneline -n5 00:01:32.569 eeb0605f11 version: 23.11.0 00:01:32.569 238778122a doc: update 
release notes for 23.11 00:01:32.569 46aa6b3cfc doc: fix description of RSS features 00:01:32.569 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:32.569 7e421ae345 devtools: support skipping forbid rule check 00:01:32.579 [Pipeline] } 00:01:32.590 [Pipeline] // stage 00:01:32.597 [Pipeline] stage 00:01:32.599 [Pipeline] { (Prepare) 00:01:32.619 [Pipeline] writeFile 00:01:32.635 [Pipeline] sh 00:01:32.918 + logger -p user.info -t JENKINS-CI 00:01:32.930 [Pipeline] sh 00:01:33.211 + logger -p user.info -t JENKINS-CI 00:01:33.224 [Pipeline] sh 00:01:33.507 + cat autorun-spdk.conf 00:01:33.507 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.507 SPDK_TEST_NVMF=1 00:01:33.507 SPDK_TEST_NVME_CLI=1 00:01:33.507 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.507 SPDK_TEST_NVMF_NICS=e810 00:01:33.507 SPDK_TEST_VFIOUSER=1 00:01:33.507 SPDK_RUN_UBSAN=1 00:01:33.507 NET_TYPE=phy 00:01:33.507 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:33.507 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.515 RUN_NIGHTLY=1 00:01:33.521 [Pipeline] readFile 00:01:33.548 [Pipeline] withEnv 00:01:33.550 [Pipeline] { 00:01:33.560 [Pipeline] sh 00:01:33.834 + set -ex 00:01:33.834 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:33.834 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:33.834 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.834 ++ SPDK_TEST_NVMF=1 00:01:33.834 ++ SPDK_TEST_NVME_CLI=1 00:01:33.834 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.834 ++ SPDK_TEST_NVMF_NICS=e810 00:01:33.834 ++ SPDK_TEST_VFIOUSER=1 00:01:33.834 ++ SPDK_RUN_UBSAN=1 00:01:33.834 ++ NET_TYPE=phy 00:01:33.834 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:33.834 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:33.834 ++ RUN_NIGHTLY=1 00:01:33.834 + case $SPDK_TEST_NVMF_NICS in 00:01:33.834 + DRIVERS=ice 00:01:33.834 + [[ tcp == \r\d\m\a ]] 00:01:33.834 + [[ -n ice ]] 00:01:33.834 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:33.834 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:33.834 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:33.834 rmmod: ERROR: Module irdma is not currently loaded 00:01:33.834 rmmod: ERROR: Module i40iw is not currently loaded 00:01:33.834 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:33.834 + true 00:01:33.834 + for D in $DRIVERS 00:01:33.834 + sudo modprobe ice 00:01:33.834 + exit 0 00:01:33.843 [Pipeline] } 00:01:33.860 [Pipeline] // withEnv 00:01:33.865 [Pipeline] } 00:01:33.879 [Pipeline] // stage 00:01:33.887 [Pipeline] catchError 00:01:33.889 [Pipeline] { 00:01:33.898 [Pipeline] timeout 00:01:33.898 Timeout set to expire in 50 min 00:01:33.899 [Pipeline] { 00:01:33.909 [Pipeline] stage 00:01:33.910 [Pipeline] { (Tests) 00:01:33.920 [Pipeline] sh 00:01:34.198 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:34.198 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:34.198 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:34.198 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:34.198 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:34.198 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:34.198 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:34.198 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:34.198 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:34.198 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:34.198 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:34.198 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:34.198 + source /etc/os-release 00:01:34.198 ++ NAME='Fedora Linux' 00:01:34.198 ++ VERSION='38 (Cloud Edition)' 00:01:34.198 ++ ID=fedora 00:01:34.198 ++ VERSION_ID=38 00:01:34.198 ++ VERSION_CODENAME= 00:01:34.198 ++ PLATFORM_ID=platform:f38 00:01:34.198 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:34.198 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:34.198 ++ LOGO=fedora-logo-icon 00:01:34.198 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:34.198 ++ HOME_URL=https://fedoraproject.org/ 00:01:34.198 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:34.198 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:34.198 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:34.198 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:34.198 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:34.198 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:34.198 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:34.198 ++ SUPPORT_END=2024-05-14 00:01:34.198 ++ VARIANT='Cloud Edition' 00:01:34.198 ++ VARIANT_ID=cloud 00:01:34.198 + uname -a 00:01:34.198 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:34.198 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:35.132 Hugepages 00:01:35.132 node hugesize free / total 00:01:35.132 node0 1048576kB 0 / 0 00:01:35.132 node0 2048kB 0 / 0 00:01:35.132 node1 1048576kB 0 / 0 00:01:35.132 node1 2048kB 0 / 0 00:01:35.132 00:01:35.132 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:35.132 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:35.132 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:35.132 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:35.132 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:35.132 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:35.132 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:35.132 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:35.132 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:35.132 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:35.132 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:35.132 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:35.132 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:35.132 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:35.132 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:35.132 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:35.132 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:35.132 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:35.132 + rm -f /tmp/spdk-ld-path 00:01:35.132 + source autorun-spdk.conf 00:01:35.132 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.132 ++ SPDK_TEST_NVMF=1 00:01:35.132 ++ SPDK_TEST_NVME_CLI=1 00:01:35.132 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.132 ++ SPDK_TEST_NVMF_NICS=e810 00:01:35.132 ++ SPDK_TEST_VFIOUSER=1 00:01:35.132 ++ SPDK_RUN_UBSAN=1 00:01:35.132 ++ NET_TYPE=phy 00:01:35.132 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:35.132 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.132 ++ RUN_NIGHTLY=1 00:01:35.132 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:35.132 + [[ -n '' ]] 00:01:35.132 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:35.391 + for M in /var/spdk/build-*-manifest.txt 00:01:35.391 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:35.391 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:35.391 + for M in /var/spdk/build-*-manifest.txt 00:01:35.391 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:35.391 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:35.391 ++ uname 00:01:35.391 + [[ Linux == \L\i\n\u\x ]] 00:01:35.391 + sudo dmesg -T 00:01:35.391 + sudo dmesg --clear 00:01:35.391 + dmesg_pid=330331 00:01:35.391 + [[ Fedora Linux == FreeBSD ]] 00:01:35.391 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:35.391 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:35.391 + sudo dmesg -Tw 00:01:35.391 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:35.391 + [[ -x /usr/src/fio-static/fio ]] 00:01:35.391 + export FIO_BIN=/usr/src/fio-static/fio 00:01:35.391 + FIO_BIN=/usr/src/fio-static/fio 00:01:35.391 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:35.391 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:35.391 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:35.391 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:35.391 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:35.391 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:35.391 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:35.391 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:35.391 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:35.391 Test configuration: 00:01:35.391 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.391 SPDK_TEST_NVMF=1 00:01:35.391 SPDK_TEST_NVME_CLI=1 00:01:35.391 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.391 SPDK_TEST_NVMF_NICS=e810 00:01:35.391 SPDK_TEST_VFIOUSER=1 00:01:35.391 SPDK_RUN_UBSAN=1 00:01:35.391 NET_TYPE=phy 00:01:35.391 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:35.391 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.391 RUN_NIGHTLY=1 16:53:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:35.391 16:53:51 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:35.391 16:53:51 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:35.391 16:53:51 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:35.391 16:53:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.391 16:53:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.391 16:53:51 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.391 16:53:51 -- paths/export.sh@5 -- $ export PATH 00:01:35.391 16:53:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.391 16:53:51 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:35.391 16:53:51 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:35.391 16:53:51 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1721487231.XXXXXX 00:01:35.391 16:53:51 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1721487231.JPxglM 00:01:35.392 16:53:51 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:01:35.392 16:53:51 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.392 16:53:51 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:35.392 16:53:51 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:35.392 16:53:51 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:35.392 16:53:51 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:35.392 16:53:51 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:35.392 16:53:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.392 16:53:51 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:35.392 16:53:51 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:35.392 16:53:51 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:35.392 16:53:51 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:35.392 16:53:51 -- spdk/autobuild.sh@16 -- $ date -u 00:01:35.392 Sat Jul 20 02:53:51 PM UTC 2024 00:01:35.392 16:53:51 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:35.392 LTS-59-g4b94202c6 00:01:35.392 16:53:51 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:35.392 16:53:51 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:35.392 16:53:51 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:35.392 16:53:51 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:35.392 16:53:51 -- common/autotest_common.sh@1083 -- $ xtrace_disable 
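[Editor's note: the Prepare stage above writes autorun-spdk.conf and then sources it with xtrace enabled (the '+'/'++'-prefixed lines), before reloading the NIC driver the e810 tests need. A condensed sketch of that pattern, not the job script verbatim; CONF mirrors the path used in this job:]

    #!/usr/bin/env bash
    # Sketch of the config-sourcing and driver preparation logged above.
    set -ex
    CONF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    [[ -f $CONF ]] && source "$CONF"   # pulls in SPDK_TEST_NVMF_NICS etc.
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;           # Intel E810 NICs use the ice driver
        *)    DRIVERS= ;;
    esac
    # Unload RDMA modules that could interfere; as the rmmod errors in the
    # log show, it is expected that most of these are simply not loaded.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"             # load the driver this run needs
    done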
00:01:35.392 16:53:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.392 ************************************ 00:01:35.392 START TEST ubsan 00:01:35.392 ************************************ 00:01:35.392 16:53:51 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:35.392 using ubsan 00:01:35.392 00:01:35.392 real 0m0.000s 00:01:35.392 user 0m0.000s 00:01:35.392 sys 0m0.000s 00:01:35.392 16:53:51 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:35.392 16:53:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.392 ************************************ 00:01:35.392 END TEST ubsan 00:01:35.392 ************************************ 00:01:35.392 16:53:51 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:35.392 16:53:51 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:35.392 16:53:51 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:35.392 16:53:51 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:35.392 16:53:51 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:35.392 16:53:51 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.392 ************************************ 00:01:35.392 START TEST build_native_dpdk 00:01:35.392 ************************************ 00:01:35.392 16:53:51 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:35.392 16:53:51 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:35.392 16:53:51 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:35.392 16:53:51 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:35.392 16:53:51 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:35.392 16:53:51 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:35.392 16:53:51 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:35.392 16:53:51 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:35.392 16:53:51 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:35.392 16:53:51 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:35.392 16:53:51 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:35.392 16:53:51 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:35.392 16:53:51 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:35.392 16:53:51 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.392 16:53:51 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.392 16:53:51 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:35.392 16:53:51 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:35.392 16:53:51 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:35.392 eeb0605f11 version: 23.11.0 00:01:35.392 238778122a doc: update release notes for 23.11 00:01:35.392 46aa6b3cfc doc: fix description of RSS features 00:01:35.392 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:35.392 7e421ae345 devtools: support skipping forbid rule check 00:01:35.392 16:53:51 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:35.392 16:53:51 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:35.392 16:53:51 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:35.392 16:53:51 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:35.392 16:53:51 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:35.392 16:53:51 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:35.392 16:53:51 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:35.392 16:53:51 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:35.392 16:53:51 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:35.392 16:53:51 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:35.392 16:53:51 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:35.392 16:53:51 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:35.392 16:53:51 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:35.392 16:53:51 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:35.392 16:53:51 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:35.392 16:53:51 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:35.392 16:53:51 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:35.392 16:53:51 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:35.392 16:53:51 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:35.392 16:53:51 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:35.392 16:53:51 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:35.392 16:53:51 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:35.393 16:53:51 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:35.393 16:53:51 -- scripts/common.sh@343 -- $ case "$op" in 00:01:35.393 16:53:51 -- scripts/common.sh@344 -- $ : 1 00:01:35.393 16:53:51 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:35.393 16:53:51 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:35.393 16:53:51 -- scripts/common.sh@364 -- $ decimal 23 00:01:35.393 16:53:51 -- scripts/common.sh@352 -- $ local d=23 00:01:35.393 16:53:51 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:35.393 16:53:51 -- scripts/common.sh@354 -- $ echo 23 00:01:35.393 16:53:51 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:35.393 16:53:51 -- scripts/common.sh@365 -- $ decimal 21 00:01:35.393 16:53:51 -- scripts/common.sh@352 -- $ local d=21 00:01:35.393 16:53:51 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:35.393 16:53:51 -- scripts/common.sh@354 -- $ echo 21 00:01:35.393 16:53:51 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:35.393 16:53:51 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:35.393 16:53:51 -- scripts/common.sh@366 -- $ return 1 00:01:35.393 16:53:51 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:35.393 patching file config/rte_config.h 00:01:35.393 Hunk #1 succeeded at 60 (offset 1 line). 00:01:35.393 16:53:51 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:35.393 16:53:51 -- common/autobuild_common.sh@178 -- $ uname -s 00:01:35.393 16:53:51 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:35.393 16:53:51 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:35.393 16:53:51 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:39.581 The Meson build system 00:01:39.581 Version: 1.3.1 00:01:39.581 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:39.581 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:39.581 Build type: native build 00:01:39.581 Program cat found: YES (/usr/bin/cat) 00:01:39.581 Project name: DPDK 00:01:39.581 Project version: 23.11.0 00:01:39.581 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:39.581 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:39.581 Host machine cpu family: x86_64 00:01:39.581 Host machine cpu: x86_64 00:01:39.581 Message: ## Building in Developer Mode ## 00:01:39.581 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:39.581 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:39.581 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:39.581 Program python3 found: YES (/usr/bin/python3) 00:01:39.581 Program cat found: YES (/usr/bin/cat) 00:01:39.581 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
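[Editor's note: the lt/cmp_versions trace above is scripts/common.sh deciding that DPDK 23.11.0 is not older than 21.11.0, so only the rte_config.h patch is applied. A condensed sketch of the comparison logic the trace walks through (split on ".-:", compare component-wise), not the script verbatim:]

    #!/usr/bin/env bash
    # Condensed sketch of the dotted-version comparison traced above.
    # Returns 0 if $1 < $2, 1 otherwise.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            # Missing components compare as 0 (e.g. "23.11" vs "23.11.0").
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 23.11.0 21.11.0 || echo "23.11.0 >= 21.11.0"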
00:01:39.581 Compiler for C supports arguments -march=native: YES 00:01:39.581 Checking for size of "void *" : 8 00:01:39.581 Checking for size of "void *" : 8 (cached) 00:01:39.581 Library m found: YES 00:01:39.581 Library numa found: YES 00:01:39.581 Has header "numaif.h" : YES 00:01:39.581 Library fdt found: NO 00:01:39.581 Library execinfo found: NO 00:01:39.581 Has header "execinfo.h" : YES 00:01:39.581 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:39.581 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:39.581 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:39.581 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:39.581 Run-time dependency openssl found: YES 3.0.9 00:01:39.581 Run-time dependency libpcap found: YES 1.10.4 00:01:39.581 Has header "pcap.h" with dependency libpcap: YES 00:01:39.581 Compiler for C supports arguments -Wcast-qual: YES 00:01:39.581 Compiler for C supports arguments -Wdeprecated: YES 00:01:39.581 Compiler for C supports arguments -Wformat: YES 00:01:39.581 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:39.581 Compiler for C supports arguments -Wformat-security: NO 00:01:39.581 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.581 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:39.581 Compiler for C supports arguments -Wnested-externs: YES 00:01:39.581 Compiler for C supports arguments -Wold-style-definition: YES 00:01:39.581 Compiler for C supports arguments -Wpointer-arith: YES 00:01:39.581 Compiler for C supports arguments -Wsign-compare: YES 00:01:39.581 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:39.581 Compiler for C supports arguments -Wundef: YES 00:01:39.581 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.581 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:39.581 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:39.581 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.581 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:39.581 Program objdump found: YES (/usr/bin/objdump) 00:01:39.581 Compiler for C supports arguments -mavx512f: YES 00:01:39.581 Checking if "AVX512 checking" compiles: YES 00:01:39.581 Fetching value of define "__SSE4_2__" : 1 00:01:39.581 Fetching value of define "__AES__" : 1 00:01:39.581 Fetching value of define "__AVX__" : 1 00:01:39.581 Fetching value of define "__AVX2__" : (undefined) 00:01:39.581 Fetching value of define "__AVX512BW__" : (undefined) 00:01:39.581 Fetching value of define "__AVX512CD__" : (undefined) 00:01:39.581 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:39.581 Fetching value of define "__AVX512F__" : (undefined) 00:01:39.581 Fetching value of define "__AVX512VL__" : (undefined) 00:01:39.581 Fetching value of define "__PCLMUL__" : 1 00:01:39.581 Fetching value of define "__RDRND__" : 1 00:01:39.581 Fetching value of define "__RDSEED__" : (undefined) 00:01:39.581 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:39.581 Fetching value of define "__znver1__" : (undefined) 00:01:39.581 Fetching value of define "__znver2__" : (undefined) 00:01:39.581 Fetching value of define "__znver3__" : (undefined) 00:01:39.581 Fetching value of define "__znver4__" : (undefined) 00:01:39.581 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:39.581 Message: lib/log: Defining dependency "log" 00:01:39.581 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:39.581 Message: lib/telemetry: Defining dependency "telemetry" 00:01:39.581 Checking for function "getentropy" : NO 00:01:39.581 Message: lib/eal: Defining dependency "eal" 00:01:39.581 Message: lib/ring: Defining dependency "ring" 00:01:39.581 Message: lib/rcu: Defining dependency "rcu" 00:01:39.581 Message: lib/mempool: Defining dependency "mempool" 00:01:39.581 Message: lib/mbuf: Defining dependency "mbuf" 00:01:39.581 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:39.581 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:39.581 Compiler for C supports arguments -mpclmul: YES 00:01:39.581 Compiler for C supports arguments -maes: YES 00:01:39.581 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:39.581 Compiler for C supports arguments -mavx512bw: YES 00:01:39.581 Compiler for C supports arguments -mavx512dq: YES 00:01:39.581 Compiler for C supports arguments -mavx512vl: YES 00:01:39.581 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:39.581 Compiler for C supports arguments -mavx2: YES 00:01:39.581 Compiler for C supports arguments -mavx: YES 00:01:39.581 Message: lib/net: Defining dependency "net" 00:01:39.581 Message: lib/meter: Defining dependency "meter" 00:01:39.581 Message: lib/ethdev: Defining dependency "ethdev" 00:01:39.581 Message: lib/pci: Defining dependency "pci" 00:01:39.581 Message: lib/cmdline: Defining dependency "cmdline" 00:01:39.581 Message: lib/metrics: Defining dependency "metrics" 00:01:39.581 Message: lib/hash: Defining dependency "hash" 00:01:39.581 Message: lib/timer: Defining dependency "timer" 00:01:39.581 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:39.581 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:39.581 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:39.581 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:39.581 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:39.581 Message: lib/acl: Defining dependency "acl" 00:01:39.581 Message: lib/bbdev: Defining dependency "bbdev" 00:01:39.581 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:39.581 Run-time dependency libelf found: YES 0.190 00:01:39.581 Message: lib/bpf: Defining dependency "bpf" 00:01:39.581 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:39.581 Message: lib/compressdev: Defining dependency "compressdev" 00:01:39.581 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:39.581 Message: lib/distributor: Defining dependency "distributor" 00:01:39.581 Message: lib/dmadev: Defining dependency "dmadev" 00:01:39.581 Message: lib/efd: Defining dependency "efd" 00:01:39.581 Message: lib/eventdev: Defining dependency "eventdev" 00:01:39.581 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:39.581 Message: lib/gpudev: Defining dependency "gpudev" 00:01:39.581 Message: lib/gro: Defining dependency "gro" 00:01:39.581 Message: lib/gso: Defining dependency "gso" 00:01:39.581 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:39.581 Message: lib/jobstats: Defining dependency "jobstats" 00:01:39.581 Message: lib/latencystats: Defining dependency "latencystats" 00:01:39.581 Message: lib/lpm: Defining dependency "lpm" 00:01:39.581 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:39.581 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:39.581 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:39.581 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:39.581 Message: lib/member: Defining dependency "member" 00:01:39.581 Message: lib/pcapng: Defining dependency "pcapng" 00:01:39.581 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:39.581 Message: lib/power: Defining dependency "power" 00:01:39.581 Message: lib/rawdev: Defining dependency "rawdev" 00:01:39.581 Message: lib/regexdev: Defining dependency "regexdev" 00:01:39.581 Message: lib/mldev: Defining dependency "mldev" 00:01:39.581 Message: lib/rib: Defining dependency "rib" 00:01:39.581 Message: lib/reorder: Defining dependency "reorder" 00:01:39.581 Message: lib/sched: Defining dependency "sched" 00:01:39.582 Message: lib/security: Defining dependency "security" 00:01:39.582 Message: lib/stack: Defining dependency "stack" 00:01:39.582 Has header "linux/userfaultfd.h" : YES 00:01:39.582 Has header "linux/vduse.h" : YES 00:01:39.582 Message: lib/vhost: Defining dependency "vhost" 00:01:39.582 Message: lib/ipsec: Defining dependency "ipsec" 00:01:39.582 Message: lib/pdcp: Defining dependency "pdcp" 00:01:39.582 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:39.582 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:39.582 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:39.582 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:39.582 Message: lib/fib: Defining dependency "fib" 00:01:39.582 Message: lib/port: Defining dependency "port" 00:01:39.582 Message: lib/pdump: Defining dependency "pdump" 00:01:39.582 Message: lib/table: Defining dependency "table" 00:01:39.582 Message: lib/pipeline: Defining dependency "pipeline" 00:01:39.582 Message: lib/graph: Defining dependency "graph" 00:01:39.582 Message: lib/node: Defining dependency "node" 00:01:41.492 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:41.492 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:41.492 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:41.492 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:41.492 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:41.492 Compiler for C supports arguments -Wno-unused-value: YES 00:01:41.492 Compiler for C supports arguments -Wno-format: YES 00:01:41.492 Compiler for C supports arguments -Wno-format-security: YES 00:01:41.492 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:41.492 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:41.492 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:41.492 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:41.492 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:41.492 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:41.492 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:41.492 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:41.492 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:41.492 Has header "sys/epoll.h" : YES 00:01:41.492 Program doxygen found: YES (/usr/bin/doxygen) 00:01:41.492 Configuring doxy-api-html.conf using configuration 00:01:41.492 Configuring doxy-api-man.conf using configuration 00:01:41.492 Program mandb found: YES (/usr/bin/mandb) 00:01:41.492 Program sphinx-build found: NO 00:01:41.492 Configuring rte_build_config.h using configuration 00:01:41.492 Message: 00:01:41.492 ================= 00:01:41.492 Applications Enabled 00:01:41.492 
================= 00:01:41.492 00:01:41.492 apps: 00:01:41.492 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:41.492 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:41.492 test-pmd, test-regex, test-sad, test-security-perf, 00:01:41.492 00:01:41.492 Message: 00:01:41.492 ================= 00:01:41.492 Libraries Enabled 00:01:41.492 ================= 00:01:41.492 00:01:41.492 libs: 00:01:41.492 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:41.492 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:41.492 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:41.492 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:41.492 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:41.492 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:41.492 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:41.492 00:01:41.492 00:01:41.492 Message: 00:01:41.492 =============== 00:01:41.492 Drivers Enabled 00:01:41.492 =============== 00:01:41.492 00:01:41.492 common: 00:01:41.492 00:01:41.492 bus: 00:01:41.492 pci, vdev, 00:01:41.492 mempool: 00:01:41.492 ring, 00:01:41.492 dma: 00:01:41.492 00:01:41.492 net: 00:01:41.492 i40e, 00:01:41.492 raw: 00:01:41.492 00:01:41.492 crypto: 00:01:41.492 00:01:41.492 compress: 00:01:41.492 00:01:41.492 regex: 00:01:41.492 00:01:41.492 ml: 00:01:41.492 00:01:41.492 vdpa: 00:01:41.492 00:01:41.492 event: 00:01:41.492 00:01:41.492 baseband: 00:01:41.492 00:01:41.492 gpu: 00:01:41.492 00:01:41.492 00:01:41.492 Message: 00:01:41.492 ================= 00:01:41.492 Content Skipped 00:01:41.492 ================= 00:01:41.492 00:01:41.492 apps: 00:01:41.492 00:01:41.492 libs: 00:01:41.492 00:01:41.492 drivers: 00:01:41.492 common/cpt: not in enabled drivers build config 00:01:41.492 common/dpaax: not in enabled drivers build config 00:01:41.492 common/iavf: not in enabled drivers build config 00:01:41.492 common/idpf: not in enabled drivers build config 00:01:41.492 common/mvep: not in enabled drivers build config 00:01:41.492 common/octeontx: not in enabled drivers build config 00:01:41.492 bus/auxiliary: not in enabled drivers build config 00:01:41.492 bus/cdx: not in enabled drivers build config 00:01:41.492 bus/dpaa: not in enabled drivers build config 00:01:41.492 bus/fslmc: not in enabled drivers build config 00:01:41.492 bus/ifpga: not in enabled drivers build config 00:01:41.492 bus/platform: not in enabled drivers build config 00:01:41.492 bus/vmbus: not in enabled drivers build config 00:01:41.492 common/cnxk: not in enabled drivers build config 00:01:41.492 common/mlx5: not in enabled drivers build config 00:01:41.492 common/nfp: not in enabled drivers build config 00:01:41.492 common/qat: not in enabled drivers build config 00:01:41.492 common/sfc_efx: not in enabled drivers build config 00:01:41.492 mempool/bucket: not in enabled drivers build config 00:01:41.492 mempool/cnxk: not in enabled drivers build config 00:01:41.492 mempool/dpaa: not in enabled drivers build config 00:01:41.492 mempool/dpaa2: not in enabled drivers build config 00:01:41.492 mempool/octeontx: not in enabled drivers build config 00:01:41.492 mempool/stack: not in enabled drivers build config 00:01:41.492 dma/cnxk: not in enabled drivers build config 00:01:41.492 dma/dpaa: not in enabled drivers build config 00:01:41.492 dma/dpaa2: not in enabled drivers build 
config 00:01:41.492 dma/hisilicon: not in enabled drivers build config 00:01:41.492 dma/idxd: not in enabled drivers build config 00:01:41.492 dma/ioat: not in enabled drivers build config 00:01:41.492 dma/skeleton: not in enabled drivers build config 00:01:41.492 net/af_packet: not in enabled drivers build config 00:01:41.492 net/af_xdp: not in enabled drivers build config 00:01:41.492 net/ark: not in enabled drivers build config 00:01:41.492 net/atlantic: not in enabled drivers build config 00:01:41.492 net/avp: not in enabled drivers build config 00:01:41.492 net/axgbe: not in enabled drivers build config 00:01:41.492 net/bnx2x: not in enabled drivers build config 00:01:41.492 net/bnxt: not in enabled drivers build config 00:01:41.493 net/bonding: not in enabled drivers build config 00:01:41.493 net/cnxk: not in enabled drivers build config 00:01:41.493 net/cpfl: not in enabled drivers build config 00:01:41.493 net/cxgbe: not in enabled drivers build config 00:01:41.493 net/dpaa: not in enabled drivers build config 00:01:41.493 net/dpaa2: not in enabled drivers build config 00:01:41.493 net/e1000: not in enabled drivers build config 00:01:41.493 net/ena: not in enabled drivers build config 00:01:41.493 net/enetc: not in enabled drivers build config 00:01:41.493 net/enetfec: not in enabled drivers build config 00:01:41.493 net/enic: not in enabled drivers build config 00:01:41.493 net/failsafe: not in enabled drivers build config 00:01:41.493 net/fm10k: not in enabled drivers build config 00:01:41.493 net/gve: not in enabled drivers build config 00:01:41.493 net/hinic: not in enabled drivers build config 00:01:41.493 net/hns3: not in enabled drivers build config 00:01:41.493 net/iavf: not in enabled drivers build config 00:01:41.493 net/ice: not in enabled drivers build config 00:01:41.493 net/idpf: not in enabled drivers build config 00:01:41.493 net/igc: not in enabled drivers build config 00:01:41.493 net/ionic: not in enabled drivers build config 00:01:41.493 net/ipn3ke: not in enabled drivers build config 00:01:41.493 net/ixgbe: not in enabled drivers build config 00:01:41.493 net/mana: not in enabled drivers build config 00:01:41.493 net/memif: not in enabled drivers build config 00:01:41.493 net/mlx4: not in enabled drivers build config 00:01:41.493 net/mlx5: not in enabled drivers build config 00:01:41.493 net/mvneta: not in enabled drivers build config 00:01:41.493 net/mvpp2: not in enabled drivers build config 00:01:41.493 net/netvsc: not in enabled drivers build config 00:01:41.493 net/nfb: not in enabled drivers build config 00:01:41.493 net/nfp: not in enabled drivers build config 00:01:41.493 net/ngbe: not in enabled drivers build config 00:01:41.493 net/null: not in enabled drivers build config 00:01:41.493 net/octeontx: not in enabled drivers build config 00:01:41.493 net/octeon_ep: not in enabled drivers build config 00:01:41.493 net/pcap: not in enabled drivers build config 00:01:41.493 net/pfe: not in enabled drivers build config 00:01:41.493 net/qede: not in enabled drivers build config 00:01:41.493 net/ring: not in enabled drivers build config 00:01:41.493 net/sfc: not in enabled drivers build config 00:01:41.493 net/softnic: not in enabled drivers build config 00:01:41.493 net/tap: not in enabled drivers build config 00:01:41.493 net/thunderx: not in enabled drivers build config 00:01:41.493 net/txgbe: not in enabled drivers build config 00:01:41.493 net/vdev_netvsc: not in enabled drivers build config 00:01:41.493 net/vhost: not in enabled drivers build config 
00:01:41.493 net/virtio: not in enabled drivers build config 00:01:41.493 net/vmxnet3: not in enabled drivers build config 00:01:41.493 raw/cnxk_bphy: not in enabled drivers build config 00:01:41.493 raw/cnxk_gpio: not in enabled drivers build config 00:01:41.493 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:41.493 raw/ifpga: not in enabled drivers build config 00:01:41.493 raw/ntb: not in enabled drivers build config 00:01:41.493 raw/skeleton: not in enabled drivers build config 00:01:41.493 crypto/armv8: not in enabled drivers build config 00:01:41.493 crypto/bcmfs: not in enabled drivers build config 00:01:41.493 crypto/caam_jr: not in enabled drivers build config 00:01:41.493 crypto/ccp: not in enabled drivers build config 00:01:41.493 crypto/cnxk: not in enabled drivers build config 00:01:41.493 crypto/dpaa_sec: not in enabled drivers build config 00:01:41.493 crypto/dpaa2_sec: not in enabled drivers build config 00:01:41.493 crypto/ipsec_mb: not in enabled drivers build config 00:01:41.493 crypto/mlx5: not in enabled drivers build config 00:01:41.493 crypto/mvsam: not in enabled drivers build config 00:01:41.493 crypto/nitrox: not in enabled drivers build config 00:01:41.493 crypto/null: not in enabled drivers build config 00:01:41.493 crypto/octeontx: not in enabled drivers build config 00:01:41.493 crypto/openssl: not in enabled drivers build config 00:01:41.493 crypto/scheduler: not in enabled drivers build config 00:01:41.493 crypto/uadk: not in enabled drivers build config 00:01:41.493 crypto/virtio: not in enabled drivers build config 00:01:41.493 compress/isal: not in enabled drivers build config 00:01:41.493 compress/mlx5: not in enabled drivers build config 00:01:41.493 compress/octeontx: not in enabled drivers build config 00:01:41.493 compress/zlib: not in enabled drivers build config 00:01:41.493 regex/mlx5: not in enabled drivers build config 00:01:41.493 regex/cn9k: not in enabled drivers build config 00:01:41.493 ml/cnxk: not in enabled drivers build config 00:01:41.493 vdpa/ifc: not in enabled drivers build config 00:01:41.493 vdpa/mlx5: not in enabled drivers build config 00:01:41.493 vdpa/nfp: not in enabled drivers build config 00:01:41.493 vdpa/sfc: not in enabled drivers build config 00:01:41.493 event/cnxk: not in enabled drivers build config 00:01:41.493 event/dlb2: not in enabled drivers build config 00:01:41.493 event/dpaa: not in enabled drivers build config 00:01:41.493 event/dpaa2: not in enabled drivers build config 00:01:41.493 event/dsw: not in enabled drivers build config 00:01:41.493 event/opdl: not in enabled drivers build config 00:01:41.493 event/skeleton: not in enabled drivers build config 00:01:41.493 event/sw: not in enabled drivers build config 00:01:41.493 event/octeontx: not in enabled drivers build config 00:01:41.493 baseband/acc: not in enabled drivers build config 00:01:41.493 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:41.493 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:41.493 baseband/la12xx: not in enabled drivers build config 00:01:41.493 baseband/null: not in enabled drivers build config 00:01:41.493 baseband/turbo_sw: not in enabled drivers build config 00:01:41.493 gpu/cuda: not in enabled drivers build config 00:01:41.493 00:01:41.493 00:01:41.493 Build targets in project: 220 00:01:41.493 00:01:41.493 DPDK 23.11.0 00:01:41.493 00:01:41.493 User defined options 00:01:41.493 libdir : lib 00:01:41.493 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:41.493 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:41.493 c_link_args : 00:01:41.493 enable_docs : false 00:01:41.493 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:41.493 enable_kmods : false 00:01:41.493 machine : native 00:01:41.493 tests : false 00:01:41.493 00:01:41.493 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:41.493 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:41.493 16:53:57 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:41.493 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:41.493 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:41.493 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:41.493 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:41.493 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:41.493 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:41.493 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:41.493 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:41.493 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:41.493 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:41.493 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:41.493 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:41.493 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:41.493 [13/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:41.493 [14/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:41.493 [15/710] Linking static target lib/librte_kvargs.a 00:01:41.493 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:41.493 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:41.752 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:41.752 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:41.752 [20/710] Linking static target lib/librte_log.a 00:01:41.752 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:41.752 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.328 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.328 [24/710] Linking target lib/librte_log.so.24.0 00:01:42.328 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:42.328 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:42.328 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:42.594 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:42.594 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:42.594 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:42.594 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:42.594 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 
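[Editor's note: stripped of the pipeline plumbing, the DPDK configure-and-build sequence recorded in this section comes down to the two commands below, with the paths and options exactly as logged. Newer Meson prefers the explicit `meson setup` spelling, per the deprecation WARNING above.]

    # The configure step logged earlier in this section:
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # The compile step producing the [N/710] progress lines around this point:
    ninja -C build-tmp -j48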
00:01:42.594 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:42.594 [34/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:42.594 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:42.594 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:42.594 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:42.594 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:42.594 [39/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:42.594 [40/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:42.594 [41/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:42.594 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:42.594 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:42.594 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:42.594 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:42.594 [46/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:42.594 [47/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:42.594 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:42.594 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:42.594 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:42.594 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:42.594 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:42.594 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:42.594 [54/710] Linking target lib/librte_kvargs.so.24.0 00:01:42.594 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:42.594 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:42.594 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:42.857 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:42.857 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:42.857 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:42.857 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:42.857 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:42.857 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:42.857 [64/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:42.857 [65/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:43.123 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:43.123 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:43.123 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:43.123 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:43.123 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:43.123 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:43.123 [72/710] Linking static target lib/librte_pci.a 
00:01:43.381 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:43.381 [74/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:43.381 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:43.381 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:43.381 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:43.382 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:43.382 [79/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.382 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:43.643 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:43.643 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:43.643 [83/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:43.643 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:43.643 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:43.643 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:43.643 [87/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:43.643 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:43.643 [89/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:43.643 [90/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:43.643 [91/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:43.643 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:43.643 [93/710] Linking static target lib/librte_ring.a 00:01:43.643 [94/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:43.643 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:43.643 [96/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:43.908 [97/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:43.908 [98/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:43.908 [99/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:43.908 [100/710] Linking static target lib/librte_meter.a 00:01:43.908 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:43.908 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:43.908 [103/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:43.908 [104/710] Linking static target lib/librte_telemetry.a 00:01:43.908 [105/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:43.908 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:43.908 [107/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:43.908 [108/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:44.168 [109/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:44.168 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:44.168 [111/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:44.168 [112/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:44.168 [113/710] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:44.168 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:44.168 [115/710] Linking static target lib/librte_eal.a 00:01:44.168 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.168 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:44.168 [118/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.168 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:44.434 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:44.434 [121/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:44.434 [122/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:44.434 [123/710] Linking static target lib/librte_net.a 00:01:44.434 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:44.434 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:44.434 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:44.434 [127/710] Linking static target lib/librte_cmdline.a 00:01:44.693 [128/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:44.693 [129/710] Linking static target lib/librte_mempool.a 00:01:44.693 [130/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.693 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:44.693 [132/710] Linking target lib/librte_telemetry.so.24.0 00:01:44.693 [133/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:44.693 [134/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:44.693 [135/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:44.693 [136/710] Linking static target lib/librte_cfgfile.a 00:01:44.693 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:44.693 [138/710] Linking static target lib/librte_metrics.a 00:01:44.693 [139/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.958 [140/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:44.958 [141/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:44.958 [142/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:44.958 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:44.958 [144/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:45.223 [145/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:45.223 [146/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:45.223 [147/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:45.223 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:45.223 [149/710] Linking static target lib/librte_bitratestats.a 00:01:45.223 [150/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:45.223 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:45.223 [152/710] Linking static target lib/librte_rcu.a 00:01:45.223 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.482 [154/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:45.482 [155/710] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:45.482 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.482 [157/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:45.482 [158/710] Linking static target lib/librte_timer.a 00:01:45.482 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:45.482 [160/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:45.482 [161/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:45.482 [162/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.482 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.743 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:45.744 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:45.744 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:45.744 [167/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.744 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:45.744 [169/710] Linking static target lib/librte_bbdev.a 00:01:45.744 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.001 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.001 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:46.001 [173/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.001 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.001 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.001 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.001 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:46.001 [178/710] Linking static target lib/librte_compressdev.a 00:01:46.262 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:46.262 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:46.262 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:46.522 [182/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:46.523 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:46.523 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:46.523 [185/710] Linking static target lib/librte_distributor.a 00:01:46.523 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:46.785 [187/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:46.785 [188/710] Linking static target lib/librte_bpf.a 00:01:46.785 [189/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:46.785 [190/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.785 [191/710] Linking static target lib/librte_dmadev.a 00:01:46.785 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:46.785 [193/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:47.043 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:47.043 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:47.043 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:47.043 [197/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.043 [198/710] Linking static target lib/librte_dispatcher.a 00:01:47.043 [199/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:47.043 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:47.043 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:47.043 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:47.043 [203/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:47.302 [204/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:47.302 [205/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:47.302 [206/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:47.302 [207/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:47.302 [208/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:47.302 [209/710] Linking static target lib/librte_gpudev.a 00:01:47.302 [210/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:47.302 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:47.302 [212/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.302 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:47.302 [214/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:47.302 [215/710] Linking static target lib/librte_gro.a 00:01:47.302 [216/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:47.302 [217/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.302 [218/710] Linking static target lib/librte_jobstats.a 00:01:47.565 [219/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:47.565 [220/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:47.565 [221/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.826 [222/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:47.826 [223/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.826 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:47.826 [225/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.826 [226/710] Linking static target lib/librte_latencystats.a 00:01:47.826 [227/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:47.826 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:48.089 [229/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:48.089 [230/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:48.089 [231/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:48.089 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:48.089 [233/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:48.089 
[234/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:48.089 [235/710] Linking static target lib/librte_ip_frag.a 00:01:48.089 [236/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:48.352 [237/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.352 [238/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:48.352 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:48.352 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:48.352 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:48.352 [242/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:48.620 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:48.620 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:48.620 [245/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.620 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.620 [247/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:48.879 [248/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:48.879 [249/710] Linking static target lib/librte_gso.a 00:01:48.879 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:48.879 [251/710] Linking static target lib/librte_regexdev.a 00:01:48.879 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:48.879 [253/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:48.879 [254/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:48.879 [255/710] Linking static target lib/librte_rawdev.a 00:01:48.879 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:48.879 [257/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:49.141 [258/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:49.141 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.141 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:49.141 [261/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:49.141 [262/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:49.141 [263/710] Linking static target lib/librte_mldev.a 00:01:49.141 [264/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:49.141 [265/710] Linking static target lib/librte_efd.a 00:01:49.141 [266/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:49.141 [267/710] Linking static target lib/librte_pcapng.a 00:01:49.141 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:49.403 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:49.403 [270/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:49.403 [271/710] Linking static target lib/acl/libavx2_tmp.a 00:01:49.403 [272/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:49.403 [273/710] Linking static target lib/librte_stack.a 00:01:49.403 [274/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:49.403 [275/710] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:49.403 [276/710] Linking static target lib/librte_lpm.a 00:01:49.403 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:49.667 [278/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.667 [279/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:49.667 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:49.667 [281/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.667 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:49.667 [283/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.667 [284/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.667 [285/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:49.667 [286/710] Linking static target lib/librte_hash.a 00:01:49.667 [287/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:49.667 [288/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:49.667 [289/710] Linking static target lib/acl/libavx512_tmp.a 00:01:49.667 [290/710] Linking static target lib/librte_acl.a 00:01:49.930 [291/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:49.930 [292/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:49.930 [293/710] Linking static target lib/librte_reorder.a 00:01:49.930 [294/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.930 [295/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:49.930 [296/710] Linking static target lib/librte_power.a 00:01:49.930 [297/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:49.930 [298/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.192 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.192 [300/710] Linking static target lib/librte_security.a 00:01:50.192 [301/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:50.192 [302/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:50.192 [303/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.192 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:50.460 [305/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.460 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:50.460 [307/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:50.460 [308/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:50.460 [309/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:50.460 [310/710] Linking static target lib/librte_rib.a 00:01:50.460 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:50.460 [312/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.460 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:50.723 [314/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:50.723 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:50.723 [316/710] Linking static 
target lib/librte_mbuf.a 00:01:50.723 [317/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:50.723 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.723 [319/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:50.723 [320/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:50.723 [321/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:50.723 [322/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:50.723 [323/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:50.723 [324/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:50.982 [325/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.982 [326/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:50.982 [327/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:51.246 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.246 [329/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.246 [330/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.246 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:51.246 [332/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:51.519 [333/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.519 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:51.519 [335/710] Linking static target lib/librte_eventdev.a 00:01:51.519 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:51.519 [337/710] Linking static target lib/librte_member.a 00:01:51.819 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:51.819 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:51.819 [340/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:51.819 [341/710] Linking static target lib/librte_cryptodev.a 00:01:51.819 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:51.819 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:51.819 [344/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:52.088 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:52.088 [346/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:52.088 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:52.088 [348/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:52.088 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:52.088 [350/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:52.088 [351/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:52.088 [352/710] Linking static target lib/librte_sched.a 00:01:52.088 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.089 [354/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:52.089 [355/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:52.089 [356/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 
00:01:52.089 [357/710] Linking static target lib/librte_fib.a 00:01:52.349 [358/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.349 [359/710] Linking static target lib/librte_ethdev.a 00:01:52.349 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:52.349 [361/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:52.349 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:52.349 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:52.349 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:52.610 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:52.610 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:52.610 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.610 [368/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:52.610 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:52.610 [370/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.610 [371/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.873 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:52.873 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:52.873 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:52.873 [375/710] Linking static target lib/librte_pdump.a 00:01:53.137 [376/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:53.137 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:53.137 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:53.137 [379/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:53.137 [380/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:53.399 [381/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:53.399 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:53.399 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:53.399 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:53.399 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:53.399 [386/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.399 [387/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:53.399 [388/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:53.399 [389/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:53.399 [390/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:53.665 [391/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:53.665 [392/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:53.665 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:53.665 [394/710] Linking static target lib/librte_ipsec.a 00:01:53.665 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:53.665 [396/710] Linking static target lib/librte_table.a 00:01:53.925 [397/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:53.925 
[398/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.925 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:54.194 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:54.194 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:54.194 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.453 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:54.453 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:54.453 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:54.717 [406/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:54.717 [407/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:54.717 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:54.717 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:54.717 [410/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:54.717 [411/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:54.717 [412/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:54.717 [413/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:54.979 [414/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.979 [415/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:54.979 [416/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:54.979 [417/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.979 [418/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.979 [419/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:54.979 [420/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.239 [421/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.239 [422/710] Linking target lib/librte_eal.so.24.0 00:01:55.239 [423/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.239 [424/710] Linking static target drivers/librte_bus_vdev.a 00:01:55.239 [425/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:55.239 [426/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.239 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:55.501 [428/710] Linking static target lib/librte_port.a 00:01:55.501 [429/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:55.501 [430/710] Linking target lib/librte_ring.so.24.0 00:01:55.501 [431/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:55.501 [432/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:55.501 [433/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:55.501 [434/710] Linking target lib/librte_pci.so.24.0 00:01:55.501 [435/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.501 [436/710] Linking target lib/librte_meter.so.24.0 00:01:55.761 [437/710] Linking 
target lib/librte_timer.so.24.0 00:01:55.761 [438/710] Linking target lib/librte_acl.so.24.0 00:01:55.761 [439/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:55.761 [440/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:55.761 [441/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:55.761 [442/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:55.761 [443/710] Linking target lib/librte_cfgfile.so.24.0 00:01:55.761 [444/710] Linking target lib/librte_rcu.so.24.0 00:01:55.761 [445/710] Linking target lib/librte_dmadev.so.24.0 00:01:55.761 [446/710] Linking target lib/librte_mempool.so.24.0 00:01:55.761 [447/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:55.761 [448/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:55.761 [449/710] Linking target lib/librte_jobstats.so.24.0 00:01:55.761 [450/710] Linking static target lib/librte_graph.a 00:01:55.761 [451/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:56.026 [452/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.026 [453/710] Linking target lib/librte_rawdev.so.24.0 00:01:56.026 [454/710] Linking static target drivers/librte_bus_pci.a 00:01:56.026 [455/710] Linking target lib/librte_stack.so.24.0 00:01:56.026 [456/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:56.026 [457/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.026 [458/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:56.026 [459/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.026 [460/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:56.026 [461/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.026 [462/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:56.026 [463/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:56.026 [464/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:56.026 [465/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:56.291 [466/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:56.291 [467/710] Linking target lib/librte_mbuf.so.24.0 00:01:56.291 [468/710] Linking target lib/librte_rib.so.24.0 00:01:56.291 [469/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:56.291 [470/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:56.291 [471/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:56.291 [472/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.291 [473/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.291 [474/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.291 [475/710] Linking static target drivers/librte_mempool_ring.a 00:01:56.559 [476/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:56.559 [477/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.559 [478/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:56.559 [479/710] Linking target lib/librte_fib.so.24.0 00:01:56.559 [480/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:56.559 [481/710] Linking target lib/librte_net.so.24.0 00:01:56.559 [482/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:56.559 [483/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:56.559 [484/710] Linking target lib/librte_compressdev.so.24.0 00:01:56.559 [485/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:56.559 [486/710] Linking target lib/librte_bbdev.so.24.0 00:01:56.559 [487/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:56.559 [488/710] Linking target lib/librte_cryptodev.so.24.0 00:01:56.559 [489/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:56.559 [490/710] Linking target lib/librte_distributor.so.24.0 00:01:56.559 [491/710] Linking target lib/librte_gpudev.so.24.0 00:01:56.559 [492/710] Linking target lib/librte_mldev.so.24.0 00:01:56.559 [493/710] Linking target lib/librte_regexdev.so.24.0 00:01:56.559 [494/710] Linking target lib/librte_reorder.so.24.0 00:01:56.822 [495/710] Linking target lib/librte_sched.so.24.0 00:01:56.822 [496/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:56.822 [497/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:56.822 [498/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:56.822 [499/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:56.822 [500/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:56.822 [501/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:56.822 [502/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.822 [503/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:56.822 [504/710] Linking target lib/librte_cmdline.so.24.0 00:01:56.822 [505/710] Linking target lib/librte_hash.so.24.0 00:01:56.822 [506/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:56.822 [507/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:56.822 [508/710] Linking target lib/librte_security.so.24.0 00:01:56.822 [509/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.822 [510/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:57.084 [511/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:57.084 [512/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:57.084 [513/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:57.084 [514/710] Linking target lib/librte_efd.so.24.0 00:01:57.084 [515/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:57.084 [516/710] Linking target lib/librte_lpm.so.24.0 00:01:57.084 [517/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:57.342 [518/710] Linking target lib/librte_member.so.24.0 00:01:57.342 [519/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:57.342 [520/710] Linking target lib/librte_ipsec.so.24.0 00:01:57.342 [521/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:57.342 [522/710] 
Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:57.342 [523/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:57.342 [524/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:57.607 [525/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:57.607 [526/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:57.607 [527/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:57.607 [528/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:57.607 [529/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:57.607 [530/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:57.866 [531/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:57.866 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:58.128 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:58.128 [534/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:58.128 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:58.128 [536/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:58.128 [537/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:58.394 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:58.394 [539/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:58.394 [540/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:58.394 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:58.653 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:58.653 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:58.653 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:58.914 [545/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:58.914 [546/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:58.914 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:58.914 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:58.914 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:58.914 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:58.914 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:58.914 [552/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:59.176 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:59.176 [554/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:59.176 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:59.176 [556/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:59.176 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:59.436 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:59.436 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:59.700 [560/710] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:59.968 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:59.968 [562/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:59.968 [563/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:59.968 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:00.231 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:00.231 [566/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.231 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:00.231 [568/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:00.231 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:00.231 [570/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:00.231 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:00.490 [572/710] Linking target lib/librte_ethdev.so.24.0 00:02:00.490 [573/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:00.490 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:00.490 [575/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:00.490 [576/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:00.490 [577/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:00.490 [578/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:00.490 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:00.754 [580/710] Linking target lib/librte_metrics.so.24.0 00:02:00.754 [581/710] Linking target lib/librte_bpf.so.24.0 00:02:00.754 [582/710] Linking target lib/librte_eventdev.so.24.0 00:02:00.754 [583/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:00.754 [584/710] Linking target lib/librte_gro.so.24.0 00:02:01.014 [585/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:01.014 [586/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:01.014 [587/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:01.014 [588/710] Linking target lib/librte_gso.so.24.0 00:02:01.014 [589/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:01.014 [590/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:01.014 [591/710] Linking target lib/librte_bitratestats.so.24.0 00:02:01.014 [592/710] Linking target lib/librte_ip_frag.so.24.0 00:02:01.014 [593/710] Linking target lib/librte_latencystats.so.24.0 00:02:01.014 [594/710] Linking target lib/librte_pcapng.so.24.0 00:02:01.014 [595/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:01.014 [596/710] Linking target lib/librte_power.so.24.0 00:02:01.014 [597/710] Linking target lib/librte_dispatcher.so.24.0 00:02:01.275 [598/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:01.275 [599/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:01.275 [600/710] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:01.275 [601/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:01.275 [602/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:01.275 [603/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:01.275 [604/710] Linking static target lib/librte_pdcp.a 00:02:01.275 [605/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:01.275 [606/710] Linking target lib/librte_pdump.so.24.0 00:02:01.276 [607/710] Linking target lib/librte_port.so.24.0 00:02:01.276 [608/710] Linking target lib/librte_graph.so.24.0 00:02:01.276 [609/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:01.276 [610/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:01.543 [611/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:01.543 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:01.543 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:01.543 [614/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:01.543 [615/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:01.543 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:01.802 [617/710] Linking target lib/librte_table.so.24.0 00:02:01.802 [618/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:01.802 [619/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.802 [620/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:01.802 [621/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:01.802 [622/710] Linking target lib/librte_pdcp.so.24.0 00:02:01.802 [623/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:01.802 [624/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:02.066 [625/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:02.066 [626/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:02.066 [627/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:02.066 [628/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:02.066 [629/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:02.328 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:02.586 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:02.586 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:02.586 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:02.586 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:02.586 [635/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:02.844 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:02.844 [637/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:02.844 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:02.844 [639/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:02.844 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:02.844 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:03.102 [642/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:03.102 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:03.102 [644/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:03.102 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:03.102 [646/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:03.360 [647/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:03.360 [648/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:03.360 [649/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:03.360 [650/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:03.360 [651/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:03.621 [652/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:03.621 [653/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:03.622 [654/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:03.881 [655/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:03.881 [656/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:03.881 [657/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:03.881 [658/710] Linking static target drivers/librte_net_i40e.a 00:02:03.881 [659/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:03.881 [660/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:03.881 [661/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:04.447 [662/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:04.447 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:04.447 [664/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.447 [665/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:04.705 [666/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:04.705 [667/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:04.705 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:04.964 [669/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:05.222 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:05.497 [671/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:05.497 [672/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:05.497 [673/710] Linking static target lib/librte_node.a 00:02:05.763 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:05.763 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.763 [676/710] Linking target lib/librte_node.so.24.0 00:02:07.137 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:07.137 
[678/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:07.396 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:08.774 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:09.343 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:15.898 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.962 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.962 [684/710] Linking static target lib/librte_vhost.a 00:02:47.962 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.962 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:56.129 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:56.129 [688/710] Linking static target lib/librte_pipeline.a 00:02:56.387 [689/710] Linking target app/dpdk-proc-info 00:02:56.387 [690/710] Linking target app/dpdk-test-cmdline 00:02:56.387 [691/710] Linking target app/dpdk-pdump 00:02:56.387 [692/710] Linking target app/dpdk-dumpcap 00:02:56.387 [693/710] Linking target app/dpdk-test-acl 00:02:56.387 [694/710] Linking target app/dpdk-test-gpudev 00:02:56.387 [695/710] Linking target app/dpdk-test-fib 00:02:56.387 [696/710] Linking target app/dpdk-test-dma-perf 00:02:56.387 [697/710] Linking target app/dpdk-test-bbdev 00:02:56.387 [698/710] Linking target app/dpdk-test-regex 00:02:56.387 [699/710] Linking target app/dpdk-test-pipeline 00:02:56.387 [700/710] Linking target app/dpdk-test-flow-perf 00:02:56.387 [701/710] Linking target app/dpdk-test-sad 00:02:56.387 [702/710] Linking target app/dpdk-graph 00:02:56.387 [703/710] Linking target app/dpdk-test-security-perf 00:02:56.387 [704/710] Linking target app/dpdk-test-mldev 00:02:56.387 [705/710] Linking target app/dpdk-test-crypto-perf 00:02:56.387 [706/710] Linking target app/dpdk-test-compress-perf 00:02:56.387 [707/710] Linking target app/dpdk-test-eventdev 00:02:56.387 [708/710] Linking target app/dpdk-testpmd 00:02:58.291 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.291 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:58.291 16:55:14 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:58.548 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:58.548 [0/1] Installing files. 
00:02:58.810 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.810 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:58.811 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.812 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.812 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.813 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:58.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:58.815 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:58.816 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:58.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:58.816 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:58.816 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.385 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.386 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.386 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.386 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.386 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.386 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.386 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.387 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.388 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:59.389 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:59.389 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:59.390 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:59.390 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:59.390 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:59.390 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:59.390 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:59.390 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:59.390 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:59.390 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:59.390 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:59.390 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:59.390 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:59.390 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:59.390 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:59.390 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:59.648 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:59.648 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:59.648 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:59.648 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:59.648 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:59.648 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:59.648 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:59.648 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:59.648 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:59.648 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:59.648 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:59.648 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:59.648 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:59.648 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:59.648 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:59.648 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:59.648 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:59.648 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:59.648 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:59.648 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:59.648 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:59.648 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:59.648 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:59.648 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:59.648 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:59.648 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:59.648 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:59.648 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:59.648 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:59.648 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:59.648 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:59.648 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:59.649 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:59.649 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:59.649 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:59.649 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:59.649 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:59.649 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:59.649 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:59.649 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:59.649 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:59.649 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:59.649 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:59.649 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:59.649 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:59.649 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:59.649 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:59.649 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:59.649 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:59.649 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:59.649 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:59.649 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:59.649 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:59.649 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:59.649 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:59.649 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:59.649 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:59.649 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:59.649 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:59.649 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:59.649 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:59.649 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:59.649 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:59.649 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:59.649 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:59.649 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:59.649 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:59.649 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:59.649 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:59.649 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:59.649 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:59.649 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:59.649 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:59.649 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:59.649 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:59.649 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:59.649 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:59.649 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:59.649 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:59.649 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:59.649 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:59.649 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:59.649 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:59.649 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:59.649 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:59.649 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:59.649 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:59.649 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:59.649 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:59.649 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:59.649 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:59.649 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:59.649 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:59.649 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:59.649 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:59.649 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:59.649 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:59.649 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:59.649 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:59.649 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:59.649 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:59.649 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:59.649 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:59.649 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:59.649 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:59.649 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:59.649 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:59.649 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:59.649 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:59.649 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:59.649 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:59.649 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:59.649 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:59.649 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:59.649 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:59.649 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:59.649 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:59.649 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:59.649 16:55:15 -- common/autobuild_common.sh@189 -- $ uname -s 00:02:59.649 16:55:15 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:59.649 16:55:15 -- common/autobuild_common.sh@200 -- $ cat 00:02:59.649 16:55:15 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:59.649 00:02:59.649 real 1m24.140s 00:02:59.649 user 17m54.108s 00:02:59.649 sys 2m5.684s 00:02:59.649 16:55:15 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:59.649 16:55:15 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.649 ************************************ 00:02:59.649 END TEST build_native_dpdk 00:02:59.649 ************************************ 00:02:59.649 16:55:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:59.649 16:55:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:59.649 16:55:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:59.649 16:55:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:59.649 16:55:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:59.649 16:55:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:59.649 16:55:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:59.649 16:55:15 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:59.649 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:59.649 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:59.649 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:59.907 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:00.165 Using 'verbs' RDMA provider 00:03:10.406 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:03:20.380 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:20.380 Creating mk/config.mk...done. 00:03:20.380 Creating mk/cc.flags.mk...done. 00:03:20.380 Type 'make' to build. 00:03:20.380 16:55:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:20.380 16:55:35 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:20.380 16:55:35 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:20.380 16:55:35 -- common/autotest_common.sh@10 -- $ set +x 00:03:20.380 ************************************ 00:03:20.380 START TEST make 00:03:20.380 ************************************ 00:03:20.380 16:55:35 -- common/autotest_common.sh@1104 -- $ make -j48 00:03:20.380 make[1]: Nothing to be done for 'all'. 
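(Editor's note, not part of the captured output.) Two details of the DPDK install step above are worth spelling out: the three-level symlink chains (librte_*.so.24.0 -> .so.24 -> .so) follow the standard ELF SONAME convention, and the libdpdk.pc / libdpdk-libs.pc files placed in build/lib/pkgconfig are what the SPDK configure run above picks up, per its "Using .../dpdk/build/lib/pkgconfig for additional libs..." line. A minimal sketch of how those pieces fit together, assuming only the workspace paths shown in this log:

    # Assumes the workspace layout from this log.
    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build

    # librte_eal.so.24.0 is the real object; its SONAME (librte_eal.so.24)
    # is what the dynamic linker records at run time, while the bare .so
    # symlink is the name used at link time (-lrte_eal).
    readelf -d "$DPDK_BUILD/lib/librte_eal.so.24.0" | grep SONAME

    # configure resolves DPDK's compile and link flags through pkg-config:
    export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig"
    pkg-config --cflags libdpdk   # include directories under the build tree
    pkg-config --libs libdpdk     # -L$DPDK_BUILD/lib plus the librte_* libs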
00:03:20.640 The Meson build system 00:03:20.640 Version: 1.3.1 00:03:20.640 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:20.640 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:20.640 Build type: native build 00:03:20.640 Project name: libvfio-user 00:03:20.640 Project version: 0.0.1 00:03:20.640 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:20.640 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:20.640 Host machine cpu family: x86_64 00:03:20.640 Host machine cpu: x86_64 00:03:20.640 Run-time dependency threads found: YES 00:03:20.640 Library dl found: YES 00:03:20.640 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:20.640 Run-time dependency json-c found: YES 0.17 00:03:20.640 Run-time dependency cmocka found: YES 1.1.7 00:03:20.640 Program pytest-3 found: NO 00:03:20.640 Program flake8 found: NO 00:03:20.640 Program misspell-fixer found: NO 00:03:20.640 Program restructuredtext-lint found: NO 00:03:20.640 Program valgrind found: YES (/usr/bin/valgrind) 00:03:20.640 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:20.640 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:20.640 Compiler for C supports arguments -Wwrite-strings: YES 00:03:20.640 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:20.640 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:20.640 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:20.640 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:20.640 Build targets in project: 8 00:03:20.640 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:20.640 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:20.640 00:03:20.640 libvfio-user 0.0.1 00:03:20.640 00:03:20.640 User defined options 00:03:20.640 buildtype : debug 00:03:20.640 default_library: shared 00:03:20.640 libdir : /usr/local/lib 00:03:20.640 00:03:20.640 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:21.655 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:21.655 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:21.655 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:21.655 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:21.655 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:21.655 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:21.655 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:21.655 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:21.655 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:21.655 [9/37] Compiling C object samples/null.p/null.c.o 00:03:21.655 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:21.655 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:21.914 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:21.914 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:21.914 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:21.914 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:21.914 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:21.914 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:21.914 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:21.914 [19/37] Compiling C object samples/server.p/server.c.o 00:03:21.914 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:21.914 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:21.914 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:21.914 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:21.914 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:21.914 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:21.914 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:21.914 [27/37] Compiling C object samples/client.p/client.c.o 00:03:22.174 [28/37] Linking target samples/client 00:03:22.174 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:22.174 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:22.174 [31/37] Linking target test/unit_tests 00:03:22.174 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:22.435 [33/37] Linking target samples/null 00:03:22.435 [34/37] Linking target samples/server 00:03:22.435 [35/37] Linking target samples/gpio-pci-idio-16 00:03:22.435 [36/37] Linking target samples/lspci 00:03:22.435 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:22.435 INFO: autodetecting backend as ninja 00:03:22.435 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
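(Editor's note, not part of the captured output.) The libvfio-user block above is a standard Meson out-of-tree build: a configure step into build-debug whose "User defined options" summary records buildtype debug, default_library shared, and libdir /usr/local/lib, a ninja pass over the 37 targets, and then the DESTDIR-staged install that the next log line shows. A sketch of the equivalent manual invocation, assuming the paths from this log:

    # Paths as they appear in this log.
    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BLD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

    # Configure (matches the "User defined options" summary above) ...
    meson setup "$BLD" "$SRC" --buildtype=debug \
        -Ddefault_library=shared --libdir=/usr/local/lib
    # ... compile the project's targets ...
    ninja -C "$BLD"
    # ... and stage the install under DESTDIR, as the next logged command does:
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C "$BLD"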
00:03:22.435 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:23.009 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:23.009 ninja: no work to do. 00:03:35.202 CC lib/log/log.o 00:03:35.202 CC lib/log/log_flags.o 00:03:35.202 CC lib/log/log_deprecated.o 00:03:35.202 CC lib/ut_mock/mock.o 00:03:35.202 CC lib/ut/ut.o 00:03:35.202 LIB libspdk_ut_mock.a 00:03:35.202 SO libspdk_ut_mock.so.5.0 00:03:35.202 LIB libspdk_log.a 00:03:35.202 LIB libspdk_ut.a 00:03:35.202 SO libspdk_ut.so.1.0 00:03:35.202 SO libspdk_log.so.6.1 00:03:35.202 SYMLINK libspdk_ut_mock.so 00:03:35.202 SYMLINK libspdk_ut.so 00:03:35.202 SYMLINK libspdk_log.so 00:03:35.202 CC lib/ioat/ioat.o 00:03:35.202 CC lib/dma/dma.o 00:03:35.202 CXX lib/trace_parser/trace.o 00:03:35.202 CC lib/util/base64.o 00:03:35.202 CC lib/util/bit_array.o 00:03:35.202 CC lib/util/cpuset.o 00:03:35.202 CC lib/util/crc16.o 00:03:35.202 CC lib/util/crc32.o 00:03:35.202 CC lib/util/crc32c.o 00:03:35.202 CC lib/util/crc32_ieee.o 00:03:35.202 CC lib/util/crc64.o 00:03:35.202 CC lib/util/dif.o 00:03:35.202 CC lib/util/fd.o 00:03:35.202 CC lib/util/file.o 00:03:35.202 CC lib/util/hexlify.o 00:03:35.202 CC lib/util/iov.o 00:03:35.202 CC lib/util/math.o 00:03:35.202 CC lib/util/pipe.o 00:03:35.202 CC lib/util/strerror_tls.o 00:03:35.202 CC lib/util/string.o 00:03:35.203 CC lib/util/uuid.o 00:03:35.203 CC lib/util/fd_group.o 00:03:35.203 CC lib/util/xor.o 00:03:35.203 CC lib/util/zipf.o 00:03:35.203 CC lib/vfio_user/host/vfio_user_pci.o 00:03:35.203 CC lib/vfio_user/host/vfio_user.o 00:03:35.203 LIB libspdk_dma.a 00:03:35.203 SO libspdk_dma.so.3.0 00:03:35.203 SYMLINK libspdk_dma.so 00:03:35.203 LIB libspdk_ioat.a 00:03:35.203 SO libspdk_ioat.so.6.0 00:03:35.203 SYMLINK libspdk_ioat.so 00:03:35.203 LIB libspdk_vfio_user.a 00:03:35.203 SO libspdk_vfio_user.so.4.0 00:03:35.203 SYMLINK libspdk_vfio_user.so 00:03:35.203 LIB libspdk_util.a 00:03:35.203 SO libspdk_util.so.8.0 00:03:35.461 SYMLINK libspdk_util.so 00:03:35.461 LIB libspdk_trace_parser.a 00:03:35.461 SO libspdk_trace_parser.so.4.0 00:03:35.461 CC lib/rdma/common.o 00:03:35.461 CC lib/conf/conf.o 00:03:35.461 CC lib/idxd/idxd.o 00:03:35.461 CC lib/vmd/vmd.o 00:03:35.461 CC lib/env_dpdk/env.o 00:03:35.461 CC lib/json/json_parse.o 00:03:35.461 CC lib/idxd/idxd_user.o 00:03:35.461 CC lib/rdma/rdma_verbs.o 00:03:35.461 CC lib/env_dpdk/memory.o 00:03:35.461 CC lib/vmd/led.o 00:03:35.461 CC lib/idxd/idxd_kernel.o 00:03:35.461 CC lib/json/json_util.o 00:03:35.461 CC lib/env_dpdk/pci.o 00:03:35.461 CC lib/json/json_write.o 00:03:35.461 CC lib/env_dpdk/init.o 00:03:35.461 CC lib/env_dpdk/threads.o 00:03:35.461 CC lib/env_dpdk/pci_ioat.o 00:03:35.461 CC lib/env_dpdk/pci_virtio.o 00:03:35.461 CC lib/env_dpdk/pci_vmd.o 00:03:35.461 CC lib/env_dpdk/pci_idxd.o 00:03:35.461 CC lib/env_dpdk/pci_event.o 00:03:35.461 CC lib/env_dpdk/sigbus_handler.o 00:03:35.461 CC lib/env_dpdk/pci_dpdk.o 00:03:35.461 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:35.461 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:35.461 SYMLINK libspdk_trace_parser.so 00:03:35.719 LIB libspdk_rdma.a 00:03:35.719 LIB libspdk_json.a 00:03:35.719 LIB libspdk_conf.a 00:03:35.719 SO libspdk_rdma.so.5.0 00:03:35.719 SO libspdk_json.so.5.1 00:03:35.977 SO libspdk_conf.so.5.0 00:03:35.977 SYMLINK libspdk_rdma.so 00:03:35.977 SYMLINK libspdk_conf.so 00:03:35.977 SYMLINK 
libspdk_json.so 00:03:35.977 CC lib/jsonrpc/jsonrpc_server.o 00:03:35.977 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:35.977 CC lib/jsonrpc/jsonrpc_client.o 00:03:35.977 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:36.234 LIB libspdk_idxd.a 00:03:36.234 LIB libspdk_vmd.a 00:03:36.234 SO libspdk_idxd.so.11.0 00:03:36.234 SO libspdk_vmd.so.5.0 00:03:36.234 SYMLINK libspdk_idxd.so 00:03:36.234 SYMLINK libspdk_vmd.so 00:03:36.234 LIB libspdk_jsonrpc.a 00:03:36.234 SO libspdk_jsonrpc.so.5.1 00:03:36.493 SYMLINK libspdk_jsonrpc.so 00:03:36.493 CC lib/rpc/rpc.o 00:03:36.750 LIB libspdk_rpc.a 00:03:36.750 SO libspdk_rpc.so.5.0 00:03:36.750 SYMLINK libspdk_rpc.so 00:03:36.750 CC lib/trace/trace.o 00:03:36.750 CC lib/trace/trace_flags.o 00:03:36.750 CC lib/trace/trace_rpc.o 00:03:36.750 CC lib/notify/notify.o 00:03:36.750 CC lib/sock/sock.o 00:03:36.750 CC lib/notify/notify_rpc.o 00:03:36.750 CC lib/sock/sock_rpc.o 00:03:37.015 LIB libspdk_notify.a 00:03:37.015 SO libspdk_notify.so.5.0 00:03:37.015 LIB libspdk_trace.a 00:03:37.015 SYMLINK libspdk_notify.so 00:03:37.015 SO libspdk_trace.so.9.0 00:03:37.278 SYMLINK libspdk_trace.so 00:03:37.278 LIB libspdk_sock.a 00:03:37.278 SO libspdk_sock.so.8.0 00:03:37.278 CC lib/thread/thread.o 00:03:37.278 CC lib/thread/iobuf.o 00:03:37.278 SYMLINK libspdk_sock.so 00:03:37.278 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:37.278 CC lib/nvme/nvme_ctrlr.o 00:03:37.278 CC lib/nvme/nvme_fabric.o 00:03:37.278 CC lib/nvme/nvme_ns_cmd.o 00:03:37.278 CC lib/nvme/nvme_ns.o 00:03:37.278 CC lib/nvme/nvme_pcie_common.o 00:03:37.278 CC lib/nvme/nvme_pcie.o 00:03:37.278 CC lib/nvme/nvme_qpair.o 00:03:37.278 CC lib/nvme/nvme.o 00:03:37.278 CC lib/nvme/nvme_quirks.o 00:03:37.278 CC lib/nvme/nvme_transport.o 00:03:37.278 CC lib/nvme/nvme_discovery.o 00:03:37.278 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:37.278 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:37.278 CC lib/nvme/nvme_tcp.o 00:03:37.278 CC lib/nvme/nvme_opal.o 00:03:37.278 CC lib/nvme/nvme_io_msg.o 00:03:37.278 CC lib/nvme/nvme_poll_group.o 00:03:37.278 CC lib/nvme/nvme_zns.o 00:03:37.278 CC lib/nvme/nvme_cuse.o 00:03:37.278 CC lib/nvme/nvme_vfio_user.o 00:03:37.278 CC lib/nvme/nvme_rdma.o 00:03:37.540 LIB libspdk_env_dpdk.a 00:03:37.540 SO libspdk_env_dpdk.so.13.0 00:03:37.797 SYMLINK libspdk_env_dpdk.so 00:03:38.731 LIB libspdk_thread.a 00:03:38.731 SO libspdk_thread.so.9.0 00:03:38.989 SYMLINK libspdk_thread.so 00:03:38.989 CC lib/virtio/virtio.o 00:03:38.989 CC lib/init/json_config.o 00:03:38.989 CC lib/virtio/virtio_vhost_user.o 00:03:38.989 CC lib/accel/accel.o 00:03:38.989 CC lib/vfu_tgt/tgt_endpoint.o 00:03:38.989 CC lib/virtio/virtio_vfio_user.o 00:03:38.989 CC lib/vfu_tgt/tgt_rpc.o 00:03:38.989 CC lib/init/subsystem.o 00:03:38.989 CC lib/blob/blobstore.o 00:03:38.989 CC lib/accel/accel_rpc.o 00:03:38.989 CC lib/virtio/virtio_pci.o 00:03:38.989 CC lib/init/subsystem_rpc.o 00:03:38.989 CC lib/accel/accel_sw.o 00:03:38.989 CC lib/blob/request.o 00:03:38.989 CC lib/blob/zeroes.o 00:03:38.989 CC lib/init/rpc.o 00:03:38.989 CC lib/blob/blob_bs_dev.o 00:03:39.247 LIB libspdk_init.a 00:03:39.247 SO libspdk_init.so.4.0 00:03:39.247 SYMLINK libspdk_init.so 00:03:39.247 LIB libspdk_virtio.a 00:03:39.247 LIB libspdk_vfu_tgt.a 00:03:39.505 SO libspdk_vfu_tgt.so.2.0 00:03:39.505 SO libspdk_virtio.so.6.0 00:03:39.505 SYMLINK libspdk_vfu_tgt.so 00:03:39.505 SYMLINK libspdk_virtio.so 00:03:39.505 CC lib/event/app.o 00:03:39.505 CC lib/event/reactor.o 00:03:39.505 CC lib/event/log_rpc.o 00:03:39.505 CC lib/event/app_rpc.o 00:03:39.505 CC 
lib/event/scheduler_static.o 00:03:39.763 LIB libspdk_nvme.a 00:03:39.763 SO libspdk_nvme.so.12.0 00:03:39.763 LIB libspdk_event.a 00:03:40.021 SO libspdk_event.so.12.0 00:03:40.021 SYMLINK libspdk_event.so 00:03:40.021 SYMLINK libspdk_nvme.so 00:03:40.021 LIB libspdk_accel.a 00:03:40.021 SO libspdk_accel.so.14.0 00:03:40.279 SYMLINK libspdk_accel.so 00:03:40.279 CC lib/bdev/bdev.o 00:03:40.279 CC lib/bdev/bdev_rpc.o 00:03:40.279 CC lib/bdev/bdev_zone.o 00:03:40.279 CC lib/bdev/part.o 00:03:40.279 CC lib/bdev/scsi_nvme.o 00:03:41.652 LIB libspdk_blob.a 00:03:41.652 SO libspdk_blob.so.10.1 00:03:41.910 SYMLINK libspdk_blob.so 00:03:41.910 CC lib/blobfs/blobfs.o 00:03:41.910 CC lib/blobfs/tree.o 00:03:41.910 CC lib/lvol/lvol.o 00:03:42.853 LIB libspdk_blobfs.a 00:03:42.853 SO libspdk_blobfs.so.9.0 00:03:42.853 LIB libspdk_bdev.a 00:03:42.853 LIB libspdk_lvol.a 00:03:42.853 SO libspdk_bdev.so.14.0 00:03:42.853 SYMLINK libspdk_blobfs.so 00:03:42.853 SO libspdk_lvol.so.9.1 00:03:42.853 SYMLINK libspdk_lvol.so 00:03:42.853 SYMLINK libspdk_bdev.so 00:03:42.853 CC lib/scsi/dev.o 00:03:42.853 CC lib/ublk/ublk.o 00:03:42.853 CC lib/ublk/ublk_rpc.o 00:03:42.853 CC lib/ftl/ftl_core.o 00:03:42.853 CC lib/scsi/lun.o 00:03:42.854 CC lib/ftl/ftl_init.o 00:03:42.854 CC lib/scsi/port.o 00:03:42.854 CC lib/ftl/ftl_layout.o 00:03:42.854 CC lib/ftl/ftl_debug.o 00:03:42.854 CC lib/scsi/scsi.o 00:03:42.854 CC lib/nbd/nbd.o 00:03:42.854 CC lib/scsi/scsi_bdev.o 00:03:42.854 CC lib/ftl/ftl_io.o 00:03:42.854 CC lib/nbd/nbd_rpc.o 00:03:42.854 CC lib/scsi/scsi_pr.o 00:03:42.854 CC lib/ftl/ftl_sb.o 00:03:42.854 CC lib/ftl/ftl_l2p.o 00:03:42.854 CC lib/scsi/scsi_rpc.o 00:03:42.854 CC lib/nvmf/ctrlr.o 00:03:42.854 CC lib/scsi/task.o 00:03:42.854 CC lib/ftl/ftl_l2p_flat.o 00:03:42.854 CC lib/nvmf/ctrlr_discovery.o 00:03:42.854 CC lib/ftl/ftl_nv_cache.o 00:03:42.854 CC lib/ftl/ftl_band.o 00:03:42.854 CC lib/nvmf/ctrlr_bdev.o 00:03:42.854 CC lib/ftl/ftl_band_ops.o 00:03:42.854 CC lib/ftl/ftl_writer.o 00:03:42.854 CC lib/nvmf/subsystem.o 00:03:42.854 CC lib/ftl/ftl_rq.o 00:03:42.854 CC lib/nvmf/nvmf.o 00:03:42.854 CC lib/nvmf/nvmf_rpc.o 00:03:42.854 CC lib/ftl/ftl_reloc.o 00:03:42.854 CC lib/nvmf/transport.o 00:03:42.854 CC lib/ftl/ftl_l2p_cache.o 00:03:42.854 CC lib/nvmf/tcp.o 00:03:42.854 CC lib/ftl/ftl_p2l.o 00:03:42.854 CC lib/nvmf/vfio_user.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt.o 00:03:42.854 CC lib/nvmf/rdma.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:42.854 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:43.424 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:43.424 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:43.424 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:43.424 CC lib/ftl/utils/ftl_conf.o 00:03:43.424 CC lib/ftl/utils/ftl_md.o 00:03:43.424 CC lib/ftl/utils/ftl_mempool.o 00:03:43.424 CC lib/ftl/utils/ftl_bitmap.o 00:03:43.424 CC lib/ftl/utils/ftl_property.o 00:03:43.424 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:43.424 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:43.424 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:43.425 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:43.425 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:43.425 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:43.425 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:03:43.425 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:43.425 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:43.425 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:43.425 CC lib/ftl/base/ftl_base_dev.o 00:03:43.425 CC lib/ftl/base/ftl_base_bdev.o 00:03:43.425 CC lib/ftl/ftl_trace.o 00:03:43.685 LIB libspdk_nbd.a 00:03:43.685 SO libspdk_nbd.so.6.0 00:03:44.004 SYMLINK libspdk_nbd.so 00:03:44.004 LIB libspdk_scsi.a 00:03:44.004 SO libspdk_scsi.so.8.0 00:03:44.004 LIB libspdk_ublk.a 00:03:44.004 SYMLINK libspdk_scsi.so 00:03:44.004 SO libspdk_ublk.so.2.0 00:03:44.004 SYMLINK libspdk_ublk.so 00:03:44.004 CC lib/iscsi/conn.o 00:03:44.004 CC lib/vhost/vhost.o 00:03:44.004 CC lib/vhost/vhost_rpc.o 00:03:44.004 CC lib/iscsi/init_grp.o 00:03:44.004 CC lib/vhost/vhost_scsi.o 00:03:44.004 CC lib/iscsi/iscsi.o 00:03:44.004 CC lib/vhost/vhost_blk.o 00:03:44.004 CC lib/iscsi/md5.o 00:03:44.004 CC lib/vhost/rte_vhost_user.o 00:03:44.004 CC lib/iscsi/param.o 00:03:44.004 CC lib/iscsi/portal_grp.o 00:03:44.004 CC lib/iscsi/tgt_node.o 00:03:44.004 CC lib/iscsi/iscsi_subsystem.o 00:03:44.004 CC lib/iscsi/iscsi_rpc.o 00:03:44.004 CC lib/iscsi/task.o 00:03:44.262 LIB libspdk_ftl.a 00:03:44.520 SO libspdk_ftl.so.8.0 00:03:44.777 SYMLINK libspdk_ftl.so 00:03:45.341 LIB libspdk_vhost.a 00:03:45.341 SO libspdk_vhost.so.7.1 00:03:45.341 SYMLINK libspdk_vhost.so 00:03:45.341 LIB libspdk_nvmf.a 00:03:45.598 SO libspdk_nvmf.so.17.0 00:03:45.598 LIB libspdk_iscsi.a 00:03:45.598 SO libspdk_iscsi.so.7.0 00:03:45.598 SYMLINK libspdk_nvmf.so 00:03:45.599 SYMLINK libspdk_iscsi.so 00:03:45.856 CC module/env_dpdk/env_dpdk_rpc.o 00:03:45.856 CC module/vfu_device/vfu_virtio.o 00:03:45.856 CC module/vfu_device/vfu_virtio_blk.o 00:03:45.856 CC module/vfu_device/vfu_virtio_scsi.o 00:03:45.856 CC module/vfu_device/vfu_virtio_rpc.o 00:03:45.856 CC module/accel/ioat/accel_ioat.o 00:03:45.856 CC module/accel/ioat/accel_ioat_rpc.o 00:03:45.856 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:45.856 CC module/accel/iaa/accel_iaa.o 00:03:45.856 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:45.856 CC module/sock/posix/posix.o 00:03:45.856 CC module/accel/error/accel_error.o 00:03:45.856 CC module/accel/dsa/accel_dsa.o 00:03:45.856 CC module/blob/bdev/blob_bdev.o 00:03:45.856 CC module/accel/error/accel_error_rpc.o 00:03:45.856 CC module/accel/iaa/accel_iaa_rpc.o 00:03:45.856 CC module/accel/dsa/accel_dsa_rpc.o 00:03:45.856 CC module/scheduler/gscheduler/gscheduler.o 00:03:46.114 LIB libspdk_env_dpdk_rpc.a 00:03:46.114 SO libspdk_env_dpdk_rpc.so.5.0 00:03:46.114 SYMLINK libspdk_env_dpdk_rpc.so 00:03:46.114 LIB libspdk_scheduler_dpdk_governor.a 00:03:46.114 LIB libspdk_scheduler_gscheduler.a 00:03:46.114 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:46.114 SO libspdk_scheduler_gscheduler.so.3.0 00:03:46.114 LIB libspdk_accel_ioat.a 00:03:46.114 LIB libspdk_accel_error.a 00:03:46.114 LIB libspdk_scheduler_dynamic.a 00:03:46.114 LIB libspdk_accel_iaa.a 00:03:46.114 SO libspdk_scheduler_dynamic.so.3.0 00:03:46.114 SO libspdk_accel_ioat.so.5.0 00:03:46.114 SO libspdk_accel_error.so.1.0 00:03:46.114 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:46.114 SYMLINK libspdk_scheduler_gscheduler.so 00:03:46.114 SO libspdk_accel_iaa.so.2.0 00:03:46.114 LIB libspdk_accel_dsa.a 00:03:46.114 SYMLINK libspdk_scheduler_dynamic.so 00:03:46.114 SYMLINK libspdk_accel_ioat.so 00:03:46.114 SYMLINK libspdk_accel_error.so 00:03:46.114 LIB libspdk_blob_bdev.a 00:03:46.114 SO libspdk_accel_dsa.so.4.0 00:03:46.377 SYMLINK libspdk_accel_iaa.so 00:03:46.377 SO 
libspdk_blob_bdev.so.10.1 00:03:46.377 SYMLINK libspdk_accel_dsa.so 00:03:46.377 SYMLINK libspdk_blob_bdev.so 00:03:46.377 CC module/bdev/delay/vbdev_delay.o 00:03:46.377 CC module/bdev/null/bdev_null.o 00:03:46.377 CC module/bdev/lvol/vbdev_lvol.o 00:03:46.377 CC module/blobfs/bdev/blobfs_bdev.o 00:03:46.377 CC module/bdev/null/bdev_null_rpc.o 00:03:46.377 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:46.378 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:46.378 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:46.378 CC module/bdev/gpt/gpt.o 00:03:46.378 CC module/bdev/error/vbdev_error.o 00:03:46.378 CC module/bdev/error/vbdev_error_rpc.o 00:03:46.378 CC module/bdev/gpt/vbdev_gpt.o 00:03:46.378 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:46.378 CC module/bdev/iscsi/bdev_iscsi.o 00:03:46.378 CC module/bdev/passthru/vbdev_passthru.o 00:03:46.378 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:46.378 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:46.378 CC module/bdev/raid/bdev_raid.o 00:03:46.378 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:46.378 CC module/bdev/nvme/bdev_nvme.o 00:03:46.378 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:46.378 CC module/bdev/malloc/bdev_malloc.o 00:03:46.378 CC module/bdev/raid/bdev_raid_rpc.o 00:03:46.378 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:46.378 CC module/bdev/raid/bdev_raid_sb.o 00:03:46.378 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:46.378 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:46.378 CC module/bdev/ftl/bdev_ftl.o 00:03:46.378 CC module/bdev/split/vbdev_split.o 00:03:46.378 CC module/bdev/nvme/nvme_rpc.o 00:03:46.378 CC module/bdev/split/vbdev_split_rpc.o 00:03:46.378 CC module/bdev/raid/raid0.o 00:03:46.378 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:46.378 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:46.378 CC module/bdev/nvme/bdev_mdns_client.o 00:03:46.378 CC module/bdev/raid/raid1.o 00:03:46.378 CC module/bdev/nvme/vbdev_opal.o 00:03:46.378 CC module/bdev/aio/bdev_aio.o 00:03:46.378 CC module/bdev/raid/concat.o 00:03:46.378 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:46.378 CC module/bdev/aio/bdev_aio_rpc.o 00:03:46.378 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:46.638 LIB libspdk_vfu_device.a 00:03:46.638 SO libspdk_vfu_device.so.2.0 00:03:46.638 LIB libspdk_sock_posix.a 00:03:46.638 SO libspdk_sock_posix.so.5.0 00:03:46.930 SYMLINK libspdk_vfu_device.so 00:03:46.930 SYMLINK libspdk_sock_posix.so 00:03:46.930 LIB libspdk_blobfs_bdev.a 00:03:46.930 SO libspdk_blobfs_bdev.so.5.0 00:03:46.930 LIB libspdk_bdev_ftl.a 00:03:46.930 SO libspdk_bdev_ftl.so.5.0 00:03:46.930 LIB libspdk_bdev_split.a 00:03:46.930 SYMLINK libspdk_blobfs_bdev.so 00:03:46.930 LIB libspdk_bdev_null.a 00:03:46.930 SO libspdk_bdev_split.so.5.0 00:03:46.930 LIB libspdk_bdev_passthru.a 00:03:46.930 SYMLINK libspdk_bdev_ftl.so 00:03:46.930 LIB libspdk_bdev_gpt.a 00:03:46.930 LIB libspdk_bdev_error.a 00:03:46.930 LIB libspdk_bdev_aio.a 00:03:46.930 SO libspdk_bdev_null.so.5.0 00:03:46.930 SO libspdk_bdev_passthru.so.5.0 00:03:46.930 SO libspdk_bdev_aio.so.5.0 00:03:46.930 SO libspdk_bdev_gpt.so.5.0 00:03:46.930 SO libspdk_bdev_error.so.5.0 00:03:46.930 SYMLINK libspdk_bdev_split.so 00:03:46.930 LIB libspdk_bdev_delay.a 00:03:46.930 LIB libspdk_bdev_zone_block.a 00:03:46.930 LIB libspdk_bdev_malloc.a 00:03:46.930 SYMLINK libspdk_bdev_null.so 00:03:47.187 SYMLINK libspdk_bdev_passthru.so 00:03:47.187 LIB libspdk_bdev_iscsi.a 00:03:47.187 SO libspdk_bdev_delay.so.5.0 00:03:47.187 SYMLINK libspdk_bdev_gpt.so 00:03:47.187 SO libspdk_bdev_zone_block.so.5.0 
00:03:47.187 SYMLINK libspdk_bdev_aio.so 00:03:47.187 SYMLINK libspdk_bdev_error.so 00:03:47.187 SO libspdk_bdev_malloc.so.5.0 00:03:47.187 SO libspdk_bdev_iscsi.so.5.0 00:03:47.187 SYMLINK libspdk_bdev_delay.so 00:03:47.187 SYMLINK libspdk_bdev_zone_block.so 00:03:47.187 SYMLINK libspdk_bdev_malloc.so 00:03:47.187 SYMLINK libspdk_bdev_iscsi.so 00:03:47.187 LIB libspdk_bdev_lvol.a 00:03:47.187 LIB libspdk_bdev_virtio.a 00:03:47.187 SO libspdk_bdev_lvol.so.5.0 00:03:47.187 SO libspdk_bdev_virtio.so.5.0 00:03:47.187 SYMLINK libspdk_bdev_lvol.so 00:03:47.187 SYMLINK libspdk_bdev_virtio.so 00:03:47.445 LIB libspdk_bdev_raid.a 00:03:47.701 SO libspdk_bdev_raid.so.5.0 00:03:47.701 SYMLINK libspdk_bdev_raid.so 00:03:48.633 LIB libspdk_bdev_nvme.a 00:03:48.633 SO libspdk_bdev_nvme.so.6.0 00:03:48.890 SYMLINK libspdk_bdev_nvme.so 00:03:49.148 CC module/event/subsystems/vmd/vmd.o 00:03:49.148 CC module/event/subsystems/iobuf/iobuf.o 00:03:49.148 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:49.148 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:49.148 CC module/event/subsystems/scheduler/scheduler.o 00:03:49.148 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:49.148 CC module/event/subsystems/sock/sock.o 00:03:49.148 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:49.148 LIB libspdk_event_sock.a 00:03:49.148 LIB libspdk_event_vhost_blk.a 00:03:49.148 LIB libspdk_event_scheduler.a 00:03:49.148 LIB libspdk_event_vmd.a 00:03:49.148 LIB libspdk_event_vfu_tgt.a 00:03:49.148 LIB libspdk_event_iobuf.a 00:03:49.148 SO libspdk_event_sock.so.4.0 00:03:49.148 SO libspdk_event_vhost_blk.so.2.0 00:03:49.148 SO libspdk_event_scheduler.so.3.0 00:03:49.148 SO libspdk_event_vfu_tgt.so.2.0 00:03:49.148 SO libspdk_event_vmd.so.5.0 00:03:49.406 SO libspdk_event_iobuf.so.2.0 00:03:49.406 SYMLINK libspdk_event_sock.so 00:03:49.406 SYMLINK libspdk_event_vhost_blk.so 00:03:49.406 SYMLINK libspdk_event_vfu_tgt.so 00:03:49.406 SYMLINK libspdk_event_scheduler.so 00:03:49.406 SYMLINK libspdk_event_vmd.so 00:03:49.406 SYMLINK libspdk_event_iobuf.so 00:03:49.406 CC module/event/subsystems/accel/accel.o 00:03:49.663 LIB libspdk_event_accel.a 00:03:49.663 SO libspdk_event_accel.so.5.0 00:03:49.663 SYMLINK libspdk_event_accel.so 00:03:49.921 CC module/event/subsystems/bdev/bdev.o 00:03:49.921 LIB libspdk_event_bdev.a 00:03:49.921 SO libspdk_event_bdev.so.5.0 00:03:49.921 SYMLINK libspdk_event_bdev.so 00:03:50.179 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:50.179 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:50.179 CC module/event/subsystems/scsi/scsi.o 00:03:50.179 CC module/event/subsystems/nbd/nbd.o 00:03:50.179 CC module/event/subsystems/ublk/ublk.o 00:03:50.179 LIB libspdk_event_nbd.a 00:03:50.179 LIB libspdk_event_ublk.a 00:03:50.436 LIB libspdk_event_scsi.a 00:03:50.436 SO libspdk_event_nbd.so.5.0 00:03:50.436 SO libspdk_event_ublk.so.2.0 00:03:50.436 SO libspdk_event_scsi.so.5.0 00:03:50.436 SYMLINK libspdk_event_nbd.so 00:03:50.436 SYMLINK libspdk_event_ublk.so 00:03:50.436 SYMLINK libspdk_event_scsi.so 00:03:50.436 LIB libspdk_event_nvmf.a 00:03:50.436 SO libspdk_event_nvmf.so.5.0 00:03:50.436 SYMLINK libspdk_event_nvmf.so 00:03:50.436 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:50.436 CC module/event/subsystems/iscsi/iscsi.o 00:03:50.694 LIB libspdk_event_vhost_scsi.a 00:03:50.694 LIB libspdk_event_iscsi.a 00:03:50.694 SO libspdk_event_vhost_scsi.so.2.0 00:03:50.694 SO libspdk_event_iscsi.so.5.0 00:03:50.694 SYMLINK libspdk_event_vhost_scsi.so 00:03:50.694 SYMLINK libspdk_event_iscsi.so 
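The SO/SYMLINK pairs recorded throughout this build stage follow the conventional versioned shared-object layout: each library is linked as lib<name>.so.<major>.<minor>, and an unversioned development symlink is placed beside it. A minimal generic sketch of that pattern, using names taken from the log above (the recipe itself, including the soname choice, is an illustrative assumption, not SPDK's actual Makefile rule):

    # link the versioned shared object (objects assumed built with -fPIC)
    cc -shared -Wl,-soname,libspdk_log.so.6 -o libspdk_log.so.6.1 log.o log_flags.o log_deprecated.o
    # lay down the unversioned symlink that a "SYMLINK libspdk_log.so" line records
    ln -sf libspdk_log.so.6.1 libspdk_log.so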
00:03:50.694 SO libspdk.so.5.0 00:03:50.694 SYMLINK libspdk.so 00:03:50.957 CXX app/trace/trace.o 00:03:50.957 CC app/trace_record/trace_record.o 00:03:50.957 CC app/spdk_nvme_discover/discovery_aer.o 00:03:50.957 CC app/spdk_top/spdk_top.o 00:03:50.957 CC app/spdk_nvme_perf/perf.o 00:03:50.957 CC app/spdk_nvme_identify/identify.o 00:03:50.957 CC app/spdk_lspci/spdk_lspci.o 00:03:50.957 TEST_HEADER include/spdk/accel.h 00:03:50.957 TEST_HEADER include/spdk/accel_module.h 00:03:50.957 CC test/rpc_client/rpc_client_test.o 00:03:50.957 TEST_HEADER include/spdk/assert.h 00:03:50.957 TEST_HEADER include/spdk/barrier.h 00:03:50.957 TEST_HEADER include/spdk/base64.h 00:03:50.957 TEST_HEADER include/spdk/bdev.h 00:03:50.957 TEST_HEADER include/spdk/bdev_module.h 00:03:50.957 TEST_HEADER include/spdk/bdev_zone.h 00:03:50.957 TEST_HEADER include/spdk/bit_array.h 00:03:50.957 TEST_HEADER include/spdk/bit_pool.h 00:03:50.957 TEST_HEADER include/spdk/blob_bdev.h 00:03:50.957 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:50.957 TEST_HEADER include/spdk/blobfs.h 00:03:50.957 TEST_HEADER include/spdk/blob.h 00:03:50.957 TEST_HEADER include/spdk/conf.h 00:03:50.957 TEST_HEADER include/spdk/config.h 00:03:50.957 TEST_HEADER include/spdk/cpuset.h 00:03:50.957 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:50.957 TEST_HEADER include/spdk/crc16.h 00:03:50.957 CC app/spdk_dd/spdk_dd.o 00:03:50.957 TEST_HEADER include/spdk/crc32.h 00:03:50.957 TEST_HEADER include/spdk/crc64.h 00:03:50.957 CC app/iscsi_tgt/iscsi_tgt.o 00:03:50.957 CC app/nvmf_tgt/nvmf_main.o 00:03:50.957 TEST_HEADER include/spdk/dif.h 00:03:50.957 CC examples/idxd/perf/perf.o 00:03:50.957 CC examples/nvme/hello_world/hello_world.o 00:03:50.957 CC examples/ioat/verify/verify.o 00:03:50.957 TEST_HEADER include/spdk/dma.h 00:03:50.957 CC examples/sock/hello_world/hello_sock.o 00:03:50.957 CC examples/nvme/reconnect/reconnect.o 00:03:50.957 CC examples/ioat/perf/perf.o 00:03:50.957 TEST_HEADER include/spdk/endian.h 00:03:50.957 CC examples/nvme/arbitration/arbitration.o 00:03:50.957 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:50.957 CC examples/nvme/abort/abort.o 00:03:50.957 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:50.957 CC app/fio/nvme/fio_plugin.o 00:03:50.957 TEST_HEADER include/spdk/env_dpdk.h 00:03:50.957 CC examples/accel/perf/accel_perf.o 00:03:50.957 TEST_HEADER include/spdk/env.h 00:03:50.957 CC app/vhost/vhost.o 00:03:50.957 CC examples/nvme/hotplug/hotplug.o 00:03:50.957 CC examples/util/zipf/zipf.o 00:03:50.957 TEST_HEADER include/spdk/event.h 00:03:50.957 TEST_HEADER include/spdk/fd_group.h 00:03:50.957 CC examples/vmd/lsvmd/lsvmd.o 00:03:50.957 TEST_HEADER include/spdk/fd.h 00:03:50.957 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:50.957 TEST_HEADER include/spdk/file.h 00:03:50.957 TEST_HEADER include/spdk/ftl.h 00:03:50.957 CC test/event/event_perf/event_perf.o 00:03:50.957 TEST_HEADER include/spdk/gpt_spec.h 00:03:51.224 TEST_HEADER include/spdk/hexlify.h 00:03:51.224 TEST_HEADER include/spdk/histogram_data.h 00:03:51.224 CC test/thread/poller_perf/poller_perf.o 00:03:51.224 CC test/nvme/aer/aer.o 00:03:51.224 TEST_HEADER include/spdk/idxd.h 00:03:51.224 CC app/spdk_tgt/spdk_tgt.o 00:03:51.224 TEST_HEADER include/spdk/idxd_spec.h 00:03:51.224 TEST_HEADER include/spdk/init.h 00:03:51.224 TEST_HEADER include/spdk/ioat.h 00:03:51.224 TEST_HEADER include/spdk/ioat_spec.h 00:03:51.224 TEST_HEADER include/spdk/iscsi_spec.h 00:03:51.224 CC examples/blob/cli/blobcli.o 00:03:51.224 TEST_HEADER include/spdk/json.h 
00:03:51.224 CC examples/nvmf/nvmf/nvmf.o 00:03:51.224 CC examples/bdev/hello_world/hello_bdev.o 00:03:51.224 TEST_HEADER include/spdk/jsonrpc.h 00:03:51.224 CC examples/blob/hello_world/hello_blob.o 00:03:51.224 CC examples/bdev/bdevperf/bdevperf.o 00:03:51.224 CC examples/thread/thread/thread_ex.o 00:03:51.224 TEST_HEADER include/spdk/likely.h 00:03:51.224 CC test/bdev/bdevio/bdevio.o 00:03:51.224 CC test/app/bdev_svc/bdev_svc.o 00:03:51.224 CC test/blobfs/mkfs/mkfs.o 00:03:51.224 TEST_HEADER include/spdk/log.h 00:03:51.224 CC test/accel/dif/dif.o 00:03:51.224 TEST_HEADER include/spdk/lvol.h 00:03:51.224 CC test/dma/test_dma/test_dma.o 00:03:51.224 TEST_HEADER include/spdk/memory.h 00:03:51.224 TEST_HEADER include/spdk/mmio.h 00:03:51.224 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:51.224 TEST_HEADER include/spdk/nbd.h 00:03:51.224 TEST_HEADER include/spdk/notify.h 00:03:51.224 TEST_HEADER include/spdk/nvme.h 00:03:51.224 TEST_HEADER include/spdk/nvme_intel.h 00:03:51.224 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:51.224 CC test/env/mem_callbacks/mem_callbacks.o 00:03:51.224 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:51.224 CC test/lvol/esnap/esnap.o 00:03:51.224 TEST_HEADER include/spdk/nvme_spec.h 00:03:51.224 TEST_HEADER include/spdk/nvme_zns.h 00:03:51.224 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:51.224 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:51.224 TEST_HEADER include/spdk/nvmf.h 00:03:51.224 TEST_HEADER include/spdk/nvmf_spec.h 00:03:51.224 TEST_HEADER include/spdk/nvmf_transport.h 00:03:51.224 TEST_HEADER include/spdk/opal.h 00:03:51.224 TEST_HEADER include/spdk/opal_spec.h 00:03:51.224 TEST_HEADER include/spdk/pci_ids.h 00:03:51.224 TEST_HEADER include/spdk/pipe.h 00:03:51.224 TEST_HEADER include/spdk/queue.h 00:03:51.224 TEST_HEADER include/spdk/reduce.h 00:03:51.224 TEST_HEADER include/spdk/rpc.h 00:03:51.224 TEST_HEADER include/spdk/scheduler.h 00:03:51.224 TEST_HEADER include/spdk/scsi.h 00:03:51.224 TEST_HEADER include/spdk/scsi_spec.h 00:03:51.224 TEST_HEADER include/spdk/sock.h 00:03:51.224 LINK spdk_lspci 00:03:51.224 TEST_HEADER include/spdk/stdinc.h 00:03:51.224 TEST_HEADER include/spdk/string.h 00:03:51.224 TEST_HEADER include/spdk/thread.h 00:03:51.224 TEST_HEADER include/spdk/trace.h 00:03:51.224 TEST_HEADER include/spdk/trace_parser.h 00:03:51.224 TEST_HEADER include/spdk/tree.h 00:03:51.224 TEST_HEADER include/spdk/ublk.h 00:03:51.224 TEST_HEADER include/spdk/util.h 00:03:51.224 TEST_HEADER include/spdk/uuid.h 00:03:51.224 TEST_HEADER include/spdk/version.h 00:03:51.224 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:51.224 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:51.224 TEST_HEADER include/spdk/vhost.h 00:03:51.224 TEST_HEADER include/spdk/vmd.h 00:03:51.224 TEST_HEADER include/spdk/xor.h 00:03:51.224 TEST_HEADER include/spdk/zipf.h 00:03:51.224 CXX test/cpp_headers/accel.o 00:03:51.486 LINK rpc_client_test 00:03:51.486 LINK lsvmd 00:03:51.486 LINK spdk_nvme_discover 00:03:51.486 LINK event_perf 00:03:51.486 LINK zipf 00:03:51.486 LINK poller_perf 00:03:51.486 LINK interrupt_tgt 00:03:51.486 LINK nvmf_tgt 00:03:51.486 LINK pmr_persistence 00:03:51.486 LINK cmb_copy 00:03:51.486 LINK spdk_trace_record 00:03:51.486 LINK vhost 00:03:51.486 LINK iscsi_tgt 00:03:51.486 LINK verify 00:03:51.486 LINK ioat_perf 00:03:51.486 LINK hello_world 00:03:51.486 LINK bdev_svc 00:03:51.486 LINK spdk_tgt 00:03:51.486 LINK hotplug 00:03:51.486 LINK hello_sock 00:03:51.486 LINK mkfs 00:03:51.746 LINK hello_blob 00:03:51.746 LINK thread 00:03:51.746 LINK hello_bdev 
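The CXX test/cpp_headers/*.o steps interleaved here compile each public SPDK header as its own translation unit, which verifies that every installed header is self-contained under C++. A rough stand-alone sketch of the same check (the loop, paths, and flags are assumptions for illustration, not the harness's real build rule):

    mkdir -p test/cpp_headers
    for h in include/spdk/*.h; do
      # a translation unit that includes nothing but the header under test
      echo "#include <spdk/$(basename "$h")>" \
        | g++ -std=c++11 -x c++ -I include -c - -o "test/cpp_headers/$(basename "${h%.h}").o"
    done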
00:03:51.746 LINK aer 00:03:51.746 CXX test/cpp_headers/accel_module.o 00:03:51.746 CXX test/cpp_headers/assert.o 00:03:51.746 CC test/env/vtophys/vtophys.o 00:03:51.746 LINK nvmf 00:03:51.746 LINK reconnect 00:03:51.746 LINK arbitration 00:03:51.746 CXX test/cpp_headers/barrier.o 00:03:51.746 LINK spdk_dd 00:03:51.746 LINK idxd_perf 00:03:51.746 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:51.746 LINK spdk_trace 00:03:51.746 CC test/event/reactor/reactor.o 00:03:51.746 CXX test/cpp_headers/base64.o 00:03:51.746 LINK abort 00:03:51.746 CC examples/vmd/led/led.o 00:03:51.746 CC test/env/memory/memory_ut.o 00:03:51.746 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:51.746 CC app/fio/bdev/fio_plugin.o 00:03:51.746 CC test/app/histogram_perf/histogram_perf.o 00:03:52.010 CC test/app/jsoncat/jsoncat.o 00:03:52.010 LINK dif 00:03:52.010 LINK test_dma 00:03:52.010 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:52.010 LINK bdevio 00:03:52.010 LINK accel_perf 00:03:52.010 CXX test/cpp_headers/bdev.o 00:03:52.010 CC test/event/reactor_perf/reactor_perf.o 00:03:52.010 LINK nvme_fuzz 00:03:52.010 CC test/env/pci/pci_ut.o 00:03:52.010 CC test/nvme/reset/reset.o 00:03:52.010 CC test/event/app_repeat/app_repeat.o 00:03:52.010 LINK vtophys 00:03:52.010 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:52.010 LINK nvme_manage 00:03:52.010 CC test/nvme/sgl/sgl.o 00:03:52.010 CXX test/cpp_headers/bdev_module.o 00:03:52.010 CC test/event/scheduler/scheduler.o 00:03:52.010 CXX test/cpp_headers/bdev_zone.o 00:03:52.010 CXX test/cpp_headers/bit_array.o 00:03:52.010 CXX test/cpp_headers/bit_pool.o 00:03:52.010 CC test/nvme/e2edp/nvme_dp.o 00:03:52.010 LINK blobcli 00:03:52.010 CXX test/cpp_headers/blob_bdev.o 00:03:52.010 LINK reactor 00:03:52.010 LINK spdk_nvme 00:03:52.010 LINK env_dpdk_post_init 00:03:52.010 CC test/app/stub/stub.o 00:03:52.010 CC test/nvme/overhead/overhead.o 00:03:52.010 LINK led 00:03:52.010 CXX test/cpp_headers/blobfs_bdev.o 00:03:52.010 CC test/nvme/err_injection/err_injection.o 00:03:52.010 CC test/nvme/startup/startup.o 00:03:52.269 CC test/nvme/reserve/reserve.o 00:03:52.269 LINK jsoncat 00:03:52.269 LINK histogram_perf 00:03:52.269 CC test/nvme/simple_copy/simple_copy.o 00:03:52.269 CC test/nvme/connect_stress/connect_stress.o 00:03:52.269 CC test/nvme/boot_partition/boot_partition.o 00:03:52.269 LINK reactor_perf 00:03:52.269 CC test/nvme/fused_ordering/fused_ordering.o 00:03:52.269 CC test/nvme/compliance/nvme_compliance.o 00:03:52.269 CXX test/cpp_headers/blobfs.o 00:03:52.269 CXX test/cpp_headers/blob.o 00:03:52.269 CXX test/cpp_headers/conf.o 00:03:52.269 LINK app_repeat 00:03:52.269 CXX test/cpp_headers/config.o 00:03:52.269 CXX test/cpp_headers/cpuset.o 00:03:52.269 CXX test/cpp_headers/crc16.o 00:03:52.269 LINK mem_callbacks 00:03:52.269 CXX test/cpp_headers/crc32.o 00:03:52.269 CC test/nvme/fdp/fdp.o 00:03:52.269 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:52.269 CC test/nvme/cuse/cuse.o 00:03:52.532 CXX test/cpp_headers/crc64.o 00:03:52.532 CXX test/cpp_headers/dif.o 00:03:52.532 CXX test/cpp_headers/dma.o 00:03:52.532 CXX test/cpp_headers/endian.o 00:03:52.532 CXX test/cpp_headers/env_dpdk.o 00:03:52.532 CXX test/cpp_headers/env.o 00:03:52.532 CXX test/cpp_headers/event.o 00:03:52.532 LINK reset 00:03:52.532 CXX test/cpp_headers/fd_group.o 00:03:52.532 LINK startup 00:03:52.532 LINK stub 00:03:52.532 LINK scheduler 00:03:52.532 CXX test/cpp_headers/fd.o 00:03:52.532 CXX test/cpp_headers/file.o 00:03:52.532 LINK spdk_nvme_perf 00:03:52.532 CXX 
test/cpp_headers/ftl.o 00:03:52.532 CXX test/cpp_headers/gpt_spec.o 00:03:52.532 LINK err_injection 00:03:52.532 LINK spdk_nvme_identify 00:03:52.532 LINK sgl 00:03:52.532 CXX test/cpp_headers/hexlify.o 00:03:52.532 LINK reserve 00:03:52.532 LINK boot_partition 00:03:52.532 CXX test/cpp_headers/histogram_data.o 00:03:52.532 LINK bdevperf 00:03:52.532 LINK spdk_top 00:03:52.532 LINK nvme_dp 00:03:52.532 LINK connect_stress 00:03:52.532 LINK overhead 00:03:52.532 CXX test/cpp_headers/idxd.o 00:03:52.532 CXX test/cpp_headers/idxd_spec.o 00:03:52.532 CXX test/cpp_headers/init.o 00:03:52.532 LINK pci_ut 00:03:52.794 CXX test/cpp_headers/ioat.o 00:03:52.794 LINK fused_ordering 00:03:52.794 LINK simple_copy 00:03:52.794 CXX test/cpp_headers/ioat_spec.o 00:03:52.794 CXX test/cpp_headers/iscsi_spec.o 00:03:52.794 CXX test/cpp_headers/json.o 00:03:52.794 CXX test/cpp_headers/jsonrpc.o 00:03:52.794 LINK doorbell_aers 00:03:52.794 CXX test/cpp_headers/likely.o 00:03:52.794 LINK vhost_fuzz 00:03:52.794 CXX test/cpp_headers/log.o 00:03:52.794 CXX test/cpp_headers/lvol.o 00:03:52.794 CXX test/cpp_headers/memory.o 00:03:52.794 CXX test/cpp_headers/mmio.o 00:03:52.794 CXX test/cpp_headers/nbd.o 00:03:52.794 LINK spdk_bdev 00:03:52.794 CXX test/cpp_headers/notify.o 00:03:52.794 CXX test/cpp_headers/nvme.o 00:03:52.794 CXX test/cpp_headers/nvme_intel.o 00:03:52.794 LINK nvme_compliance 00:03:52.794 CXX test/cpp_headers/nvme_ocssd.o 00:03:52.794 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:52.794 CXX test/cpp_headers/nvme_spec.o 00:03:52.794 CXX test/cpp_headers/nvme_zns.o 00:03:52.794 CXX test/cpp_headers/nvmf_cmd.o 00:03:52.794 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:52.794 CXX test/cpp_headers/nvmf.o 00:03:52.794 CXX test/cpp_headers/nvmf_spec.o 00:03:52.794 CXX test/cpp_headers/nvmf_transport.o 00:03:52.794 CXX test/cpp_headers/opal.o 00:03:52.794 CXX test/cpp_headers/opal_spec.o 00:03:52.794 CXX test/cpp_headers/pipe.o 00:03:52.794 CXX test/cpp_headers/pci_ids.o 00:03:53.057 CXX test/cpp_headers/queue.o 00:03:53.057 CXX test/cpp_headers/reduce.o 00:03:53.057 CXX test/cpp_headers/rpc.o 00:03:53.057 CXX test/cpp_headers/scheduler.o 00:03:53.057 CXX test/cpp_headers/scsi.o 00:03:53.057 CXX test/cpp_headers/scsi_spec.o 00:03:53.057 CXX test/cpp_headers/sock.o 00:03:53.057 CXX test/cpp_headers/stdinc.o 00:03:53.057 CXX test/cpp_headers/string.o 00:03:53.057 CXX test/cpp_headers/thread.o 00:03:53.057 CXX test/cpp_headers/trace.o 00:03:53.057 CXX test/cpp_headers/trace_parser.o 00:03:53.057 LINK fdp 00:03:53.057 CXX test/cpp_headers/tree.o 00:03:53.057 CXX test/cpp_headers/ublk.o 00:03:53.057 CXX test/cpp_headers/util.o 00:03:53.057 CXX test/cpp_headers/uuid.o 00:03:53.057 CXX test/cpp_headers/version.o 00:03:53.057 CXX test/cpp_headers/vfio_user_pci.o 00:03:53.057 CXX test/cpp_headers/vfio_user_spec.o 00:03:53.057 CXX test/cpp_headers/vhost.o 00:03:53.057 CXX test/cpp_headers/vmd.o 00:03:53.057 CXX test/cpp_headers/xor.o 00:03:53.057 CXX test/cpp_headers/zipf.o 00:03:53.623 LINK memory_ut 00:03:53.881 LINK cuse 00:03:54.138 LINK iscsi_fuzz 00:03:56.661 LINK esnap 00:03:56.918 00:03:56.918 real 0m37.998s 00:03:56.918 user 7m14.418s 00:03:56.918 sys 1m37.161s 00:03:56.918 16:56:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:56.918 16:56:13 -- common/autotest_common.sh@10 -- $ set +x 00:03:56.918 ************************************ 00:03:56.918 END TEST make 00:03:56.918 ************************************ 00:03:57.175 16:56:13 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.175 16:56:13 -- nvmf/common.sh@7 -- # uname -s 00:03:57.175 16:56:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.175 16:56:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.175 16:56:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.175 16:56:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.175 16:56:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.175 16:56:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.175 16:56:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.175 16:56:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.175 16:56:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.175 16:56:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.175 16:56:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.175 16:56:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:57.175 16:56:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.175 16:56:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.175 16:56:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:57.175 16:56:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.175 16:56:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.175 16:56:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.175 16:56:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.175 16:56:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.175 16:56:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.175 16:56:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.175 16:56:13 -- paths/export.sh@5 -- # export PATH 00:03:57.175 16:56:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.175 16:56:13 -- nvmf/common.sh@46 -- # : 0 00:03:57.175 16:56:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:57.175 16:56:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:57.175 16:56:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:57.175 16:56:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.175 16:56:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.175 16:56:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:57.175 16:56:13 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:03:57.175 16:56:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:57.175 16:56:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:57.175 16:56:13 -- spdk/autotest.sh@32 -- # uname -s 00:03:57.175 16:56:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:57.175 16:56:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:57.175 16:56:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.175 16:56:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:57.175 16:56:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.175 16:56:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:57.175 16:56:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:57.175 16:56:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:57.175 16:56:13 -- spdk/autotest.sh@48 -- # udevadm_pid=385414 00:03:57.175 16:56:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:57.175 16:56:13 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:57.175 16:56:13 -- spdk/autotest.sh@54 -- # echo 385416 00:03:57.175 16:56:13 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:57.175 16:56:13 -- spdk/autotest.sh@56 -- # echo 385417 00:03:57.175 16:56:13 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:57.175 16:56:13 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:03:57.175 16:56:13 -- spdk/autotest.sh@60 -- # echo 385418 00:03:57.175 16:56:13 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:57.175 16:56:13 -- spdk/autotest.sh@62 -- # echo 385419 00:03:57.175 16:56:13 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:57.175 16:56:13 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:57.175 16:56:13 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:57.175 16:56:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:57.175 16:56:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.175 16:56:13 -- spdk/autotest.sh@70 -- # create_test_list 00:03:57.175 16:56:13 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:57.175 16:56:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:57.175 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:57.175 16:56:13 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:57.175 16:56:13 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.175 16:56:13 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.175 16:56:13 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:57.175 16:56:13 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.175 16:56:13 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:57.175 16:56:13 -- common/autotest_common.sh@1440 -- # uname 00:03:57.175 16:56:13 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:57.175 16:56:13 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:57.175 16:56:13 -- common/autotest_common.sh@1460 -- # uname 00:03:57.175 16:56:13 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:57.175 16:56:13 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:57.175 16:56:13 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:57.175 16:56:13 -- spdk/autotest.sh@83 -- # hash lcov 00:03:57.175 16:56:13 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:57.175 16:56:13 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:57.175 --rc lcov_branch_coverage=1 00:03:57.175 --rc lcov_function_coverage=1 00:03:57.175 --rc genhtml_branch_coverage=1 00:03:57.175 --rc genhtml_function_coverage=1 00:03:57.175 --rc genhtml_legend=1 00:03:57.175 --rc geninfo_all_blocks=1 00:03:57.175 ' 00:03:57.175 16:56:13 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:57.175 --rc lcov_branch_coverage=1 00:03:57.175 --rc lcov_function_coverage=1 00:03:57.175 --rc genhtml_branch_coverage=1 00:03:57.175 --rc genhtml_function_coverage=1 00:03:57.175 --rc genhtml_legend=1 00:03:57.175 --rc geninfo_all_blocks=1 00:03:57.175 ' 00:03:57.175 16:56:13 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:57.175 --rc lcov_branch_coverage=1 00:03:57.175 --rc lcov_function_coverage=1 00:03:57.175 --rc genhtml_branch_coverage=1 00:03:57.175 --rc genhtml_function_coverage=1 00:03:57.175 --rc genhtml_legend=1 00:03:57.175 --rc 
geninfo_all_blocks=1 00:03:57.175 --no-external' 00:03:57.175 16:56:13 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:57.175 --rc lcov_branch_coverage=1 00:03:57.175 --rc lcov_function_coverage=1 00:03:57.175 --rc genhtml_branch_coverage=1 00:03:57.175 --rc genhtml_function_coverage=1 00:03:57.175 --rc genhtml_legend=1 00:03:57.175 --rc geninfo_all_blocks=1 00:03:57.175 --no-external' 00:03:57.175 16:56:13 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:57.175 lcov: LCOV version 1.14 00:03:57.175 16:56:13 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:59.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:59.115 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno [the same 'no functions found' / 'GCOV did not produce any data' warning pair repeats for every remaining /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/*.gcno; the header-only test objects contain no executable code for geninfo to measure] 00:04:17.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:17.456 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:17.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:17.456 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:17.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:17.456 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:39.373 16:56:52 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:39.373 16:56:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:39.373 16:56:52 -- common/autotest_common.sh@10 -- # set +x 00:04:39.373 16:56:52 -- spdk/autotest.sh@102 -- # rm -f 00:04:39.373 16:56:52 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.373 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:39.373 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:39.373 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:39.373 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:39.373 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:39.373 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:39.373 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:39.373 0000:00:04.1 (8086
0e21): Already using the ioatdma driver 00:04:39.373 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:39.373 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:39.373 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:39.373 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:39.373 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:39.373 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:39.373 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:39.373 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:39.373 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:39.373 16:56:53 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:39.373 16:56:53 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:39.373 16:56:53 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:39.373 16:56:53 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:39.373 16:56:53 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:39.373 16:56:53 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:39.373 16:56:53 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:39.373 16:56:53 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.373 16:56:53 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:39.373 16:56:53 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:39.373 16:56:53 -- spdk/autotest.sh@121 -- # grep -v p 00:04:39.373 16:56:53 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:39.373 16:56:53 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:39.373 16:56:53 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:39.373 16:56:53 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:39.373 16:56:53 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:39.373 16:56:53 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:39.373 No valid GPT data, bailing 00:04:39.373 16:56:53 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.373 16:56:53 -- scripts/common.sh@393 -- # pt= 00:04:39.373 16:56:53 -- scripts/common.sh@394 -- # return 1 00:04:39.373 16:56:53 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:39.373 1+0 records in 00:04:39.373 1+0 records out 00:04:39.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00197328 s, 531 MB/s 00:04:39.373 16:56:53 -- spdk/autotest.sh@129 -- # sync 00:04:39.373 16:56:53 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:39.373 16:56:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:39.373 16:56:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.632 16:56:55 -- spdk/autotest.sh@135 -- # uname -s 00:04:39.632 16:56:55 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:39.632 16:56:55 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:39.632 16:56:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.632 16:56:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.632 16:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:39.632 ************************************ 00:04:39.632 START TEST setup.sh 00:04:39.632 ************************************ 00:04:39.632 16:56:55 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:39.632 * Looking for test storage... 00:04:39.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.632 16:56:55 -- setup/test-setup.sh@10 -- # uname -s 00:04:39.632 16:56:55 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:39.632 16:56:55 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:39.632 16:56:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.632 16:56:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.632 16:56:55 -- common/autotest_common.sh@10 -- # set +x 00:04:39.632 ************************************ 00:04:39.632 START TEST acl 00:04:39.632 ************************************ 00:04:39.632 16:56:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:39.632 * Looking for test storage... 00:04:39.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.632 16:56:55 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.632 16:56:55 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:39.632 16:56:55 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:39.632 16:56:55 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:39.632 16:56:55 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:39.632 16:56:55 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:39.632 16:56:55 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:39.632 16:56:55 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.632 16:56:55 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:39.632 16:56:55 -- setup/acl.sh@12 -- # devs=() 00:04:39.632 16:56:55 -- setup/acl.sh@12 -- # declare -a devs 00:04:39.632 16:56:55 -- setup/acl.sh@13 -- # drivers=() 00:04:39.632 16:56:55 -- setup/acl.sh@13 -- # declare -A drivers 00:04:39.632 16:56:55 -- setup/acl.sh@51 -- # setup reset 00:04:39.632 16:56:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.632 16:56:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.006 16:56:57 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:41.006 16:56:57 -- setup/acl.sh@16 -- # local dev driver 00:04:41.006 16:56:57 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.006 16:56:57 -- setup/acl.sh@15 -- # setup output status 00:04:41.006 16:56:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.006 16:56:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:42.375 Hugepages 00:04:42.375 node hugesize free / total 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 00:04:42.375 Type BDF Vendor Device NUMA Driver 
Device Block devices 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.375 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.375 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:42.375 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.376 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.376 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.376 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.376 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.376 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.376 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.376 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.376 16:56:58 -- setup/acl.sh@19 -- # 
[[ 0000:80:04.4 == *:*:*.* ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.376 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.376 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.376 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.376 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.376 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.376 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # continue 00:04:42.376 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.376 16:56:58 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:42.376 16:56:58 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:42.376 16:56:58 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:42.376 16:56:58 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:42.376 16:56:58 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:42.376 16:56:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.376 16:56:58 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:42.376 16:56:58 -- setup/acl.sh@54 -- # run_test denied denied 00:04:42.376 16:56:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.376 16:56:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.376 16:56:58 -- common/autotest_common.sh@10 -- # set +x 00:04:42.376 ************************************ 00:04:42.376 START TEST denied 00:04:42.376 ************************************ 00:04:42.376 16:56:58 -- common/autotest_common.sh@1104 -- # denied 00:04:42.376 16:56:58 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:42.376 16:56:58 -- setup/acl.sh@38 -- # setup output config 00:04:42.376 16:56:58 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:42.376 16:56:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.376 16:56:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:43.749 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:43.749 16:56:59 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:43.749 16:56:59 -- setup/acl.sh@28 -- # local dev driver 00:04:43.749 16:56:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:43.749 16:56:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:43.749 16:56:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:43.749 16:56:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:43.749 16:56:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:43.749 16:56:59 -- setup/acl.sh@41 -- # setup reset 00:04:43.749 16:56:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.749 16:56:59 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.296 00:04:46.296 real 0m3.608s 00:04:46.296 user 0m1.109s 00:04:46.296 sys 0m1.717s 00:04:46.296 16:57:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.296 16:57:01 -- common/autotest_common.sh@10 -- # set +x 
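The denied test that just ran reduces to a small contract: export PCI_BLOCKED with the controller's BDF before setup.sh config and assert that the skip message appears. A condensed sketch of that flow, with SPDK_ROOT as an assumed shorthand for the checkout path used throughout this run:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export PCI_BLOCKED=' 0000:88:00.0'         # block-list the NVMe controller
    "$SPDK_ROOT/scripts/setup.sh" config \
        | grep 'Skipping denied controller at 0000:88:00.0'
    "$SPDK_ROOT/scripts/setup.sh" reset        # hand devices back to kernel drivers

The verify step in the trace then confirms the contract held: readlink on the device's driver symlink still resolves to the in-kernel nvme driver, i.e. the blocked controller was never rebound to vfio-pci.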
00:04:46.296 ************************************ 00:04:46.296 END TEST denied 00:04:46.296 ************************************ 00:04:46.296 16:57:01 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:46.296 16:57:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.296 16:57:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.296 16:57:01 -- common/autotest_common.sh@10 -- # set +x 00:04:46.296 ************************************ 00:04:46.296 START TEST allowed 00:04:46.296 ************************************ 00:04:46.296 16:57:02 -- common/autotest_common.sh@1104 -- # allowed 00:04:46.296 16:57:02 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:46.296 16:57:02 -- setup/acl.sh@45 -- # setup output config 00:04:46.296 16:57:02 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:46.296 16:57:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.296 16:57:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.191 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.191 16:57:04 -- setup/acl.sh@47 -- # verify 00:04:48.191 16:57:04 -- setup/acl.sh@28 -- # local dev driver 00:04:48.191 16:57:04 -- setup/acl.sh@48 -- # setup reset 00:04:48.191 16:57:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.191 16:57:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.563 00:04:49.563 real 0m3.612s 00:04:49.563 user 0m0.955s 00:04:49.563 sys 0m1.576s 00:04:49.563 16:57:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.563 16:57:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.563 ************************************ 00:04:49.563 END TEST allowed 00:04:49.563 ************************************ 00:04:49.563 00:04:49.563 real 0m10.016s 00:04:49.563 user 0m3.131s 00:04:49.563 sys 0m5.118s 00:04:49.563 16:57:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.563 16:57:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.563 ************************************ 00:04:49.563 END TEST acl 00:04:49.563 ************************************ 00:04:49.563 16:57:05 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:49.563 16:57:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.563 16:57:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.563 16:57:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.563 ************************************ 00:04:49.563 START TEST hugepages 00:04:49.563 ************************************ 00:04:49.563 16:57:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:49.563 * Looking for test storage... 
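allowed is the mirror image of denied: with PCI_ALLOWED naming the controller, setup.sh config is expected to rebind it, which the grep -E pattern in the trace asserts against the "nvme -> vfio-pci" line. The same flow condensed, reusing the SPDK_ROOT shorthand from the previous sketch:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export PCI_ALLOWED=0000:88:00.0            # allow-list only this controller
    "$SPDK_ROOT/scripts/setup.sh" config \
        | grep -E '0000:88:00.0 .*: nvme -> .*'   # matched here as nvme -> vfio-pci
    "$SPDK_ROOT/scripts/setup.sh" reset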
00:04:49.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:49.563 16:57:05 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:49.563 16:57:05 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:49.563 16:57:05 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:49.563 16:57:05 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:49.563 16:57:05 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:49.563 16:57:05 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:49.563 16:57:05 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:49.563 16:57:05 -- setup/common.sh@18 -- # local node= 00:04:49.563 16:57:05 -- setup/common.sh@19 -- # local var val 00:04:49.563 16:57:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.563 16:57:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.563 16:57:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.563 16:57:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.563 16:57:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.563 16:57:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.563 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.563 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.564 16:57:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 41113632 kB' 'MemAvailable: 44628180 kB' 'Buffers: 2704 kB' 'Cached: 12773964 kB' 'SwapCached: 0 kB' 'Active: 9763496 kB' 'Inactive: 3508168 kB' 'Active(anon): 9367916 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498372 kB' 'Mapped: 212996 kB' 'Shmem: 8872920 kB' 'KReclaimable: 209276 kB' 'Slab: 599716 kB' 'SReclaimable: 209276 kB' 'SUnreclaim: 390440 kB' 'KernelStack: 13040 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 10546580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197260 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.564 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.564 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.823 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.823 16:57:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.823 16:57:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # continue 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.824 16:57:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.824 16:57:05 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.824 16:57:05 -- setup/common.sh@33 -- # echo 2048 00:04:49.824 16:57:05 -- setup/common.sh@33 -- # return 0 00:04:49.824 16:57:05 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:49.824 16:57:05 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:49.824 16:57:05 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:49.824 16:57:05 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:49.824 16:57:05 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:49.824 16:57:05 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
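Each [[ key == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue pair in the trace above is one iteration of the get_meminfo helper from setup/common.sh: it reads /proc/meminfo (or a node's meminfo file when a node id is given), splits every line on ': ', and echoes the value of the first key that matches, which is why the lookup ends with echo 2048 for Hugepagesize on this runner. A condensed, hedged re-sketch of that helper, equivalent in behavior but simplified from the traced mapfile version:

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node accounting lives under sysfs when a node id is supplied
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # node meminfo lines carry a "Node N " prefix; strip it before matching
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }
    get_meminfo Hugepagesize    # -> 2048 (value is in kB) on this runner

Under xtrace every non-matching field produces exactly the continue lines seen above, which is what makes these meminfo scans so verbose in the log.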
00:04:49.824 16:57:05 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:49.824 16:57:05 -- setup/hugepages.sh@207 -- # get_nodes 00:04:49.824 16:57:05 -- setup/hugepages.sh@27 -- # local node 00:04:49.824 16:57:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.824 16:57:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:49.824 16:57:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.824 16:57:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:49.824 16:57:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.824 16:57:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.824 16:57:05 -- setup/hugepages.sh@208 -- # clear_hp 00:04:49.824 16:57:05 -- setup/hugepages.sh@37 -- # local node hp 00:04:49.824 16:57:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:49.824 16:57:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.824 16:57:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.824 16:57:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.824 16:57:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.824 16:57:05 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:49.824 16:57:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.824 16:57:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.824 16:57:05 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.824 16:57:05 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.824 16:57:05 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:49.824 16:57:05 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:49.824 16:57:05 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:49.824 16:57:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.824 16:57:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.824 16:57:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.824 ************************************ 00:04:49.824 START TEST default_setup 00:04:49.824 ************************************ 00:04:49.824 16:57:05 -- common/autotest_common.sh@1104 -- # default_setup 00:04:49.824 16:57:05 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:49.824 16:57:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.824 16:57:05 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:49.824 16:57:05 -- setup/hugepages.sh@51 -- # shift 00:04:49.824 16:57:05 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:49.824 16:57:05 -- setup/hugepages.sh@52 -- # local node_ids 00:04:49.824 16:57:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.824 16:57:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.824 16:57:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:49.824 16:57:05 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:49.824 16:57:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.824 16:57:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.824 16:57:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.824 16:57:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.824 16:57:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.824 16:57:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:49.824 16:57:05 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:49.824 16:57:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:49.824 16:57:05 -- setup/hugepages.sh@73 -- # return 0 00:04:49.824 16:57:05 -- setup/hugepages.sh@137 -- # setup output 00:04:49.824 16:57:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.824 16:57:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:51.195 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:51.195 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:51.195 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:51.195 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:51.195 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:51.195 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:51.195 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:51.195 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:51.195 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:51.195 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:51.195 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:51.195 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:51.195 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:51.195 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:51.195 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:51.195 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:52.129 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:52.129 16:57:08 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:52.130 16:57:08 -- setup/hugepages.sh@89 -- # local node 00:04:52.130 16:57:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.130 16:57:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.130 16:57:08 -- setup/hugepages.sh@92 -- # local surp 00:04:52.130 16:57:08 -- setup/hugepages.sh@93 -- # local resv 00:04:52.130 16:57:08 -- setup/hugepages.sh@94 -- # local anon 00:04:52.130 16:57:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.130 16:57:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.130 16:57:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.130 16:57:08 -- setup/common.sh@18 -- # local node= 00:04:52.130 16:57:08 -- setup/common.sh@19 -- # local var val 00:04:52.130 16:57:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.130 16:57:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.130 16:57:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.130 16:57:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.130 16:57:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.130 16:57:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43247272 kB' 'MemAvailable: 46761836 kB' 'Buffers: 2704 kB' 'Cached: 12774052 kB' 'SwapCached: 0 kB' 'Active: 9780700 kB' 'Inactive: 3508168 kB' 'Active(anon): 9385120 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515884 kB' 'Mapped: 212540 kB' 'Shmem: 8873008 kB' 'KReclaimable: 209308 kB' 'Slab: 599520 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390212 kB' 'KernelStack: 13280 kB' 'PageTables: 10576 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10563368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197544 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 
-- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.130 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.130 16:57:08 -- setup/common.sh@32 -- # continue
[... xtrace elided: setup/common.sh@31-32 read each remaining /proc/meminfo key (Slab ... HardwareCorrupted) and 'continue' past it ...]
00:04:52.131 16:57:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.131 16:57:08 -- setup/common.sh@33 -- # echo 0
00:04:52.131 16:57:08 -- setup/common.sh@33 -- # return 0
00:04:52.131 16:57:08 -- setup/hugepages.sh@97 -- # anon=0
00:04:52.131 16:57:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:52.131 16:57:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.131 16:57:08 -- setup/common.sh@18 -- # local node=
00:04:52.131 16:57:08 -- setup/common.sh@19 -- # local var val
00:04:52.131 16:57:08 -- setup/common.sh@20 -- # local mem_f mem
00:04:52.131 16:57:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.131 16:57:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.131 16:57:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.131 16:57:08 -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.131 16:57:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.131 16:57:08 -- setup/common.sh@31 -- # IFS=': '
00:04:52.131 16:57:08 -- setup/common.sh@31 -- # read -r var val _
00:04:52.131 16:57:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43244804 kB' 'MemAvailable: 46759368 kB' 'Buffers: 2704 kB' 'Cached: 12774052 kB' 'SwapCached: 0 kB' 'Active: 9782216 kB' 'Inactive: 3508168 kB' 'Active(anon): 9386636 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517248 kB' 'Mapped: 212636 kB' 'Shmem: 8873008 kB' 'KReclaimable: 209308 kB' 'Slab: 599492 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390184 kB' 'KernelStack: 12880 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10567164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[... xtrace elided: per-key scan of the snapshot above until HugePages_Surp matched ...]
00:04:52.132 16:57:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.132 16:57:08 -- setup/common.sh@33 -- # echo 0
00:04:52.132 16:57:08 -- setup/common.sh@33 -- # return 0
00:04:52.132 16:57:08 -- setup/hugepages.sh@99 -- # surp=0
00:04:52.132 16:57:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace elided: same get_meminfo locals and mapfile setup as above, node unset ...]
00:04:52.132 16:57:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43244896 kB' 'MemAvailable: 46759460 kB' 'Buffers: 2704 kB' 'Cached: 12774064 kB' 'SwapCached: 0 kB' 'Active: 9781896 kB' 'Inactive: 3508168 kB' 'Active(anon): 9386316 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516608 kB' 'Mapped: 213012 kB' 'Shmem: 8873020 kB' 'KReclaimable: 209308 kB' 'Slab: 599508 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390200 kB' 'KernelStack: 13024 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10567180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[... xtrace elided: per-key scan until HugePages_Rsvd matched ...]
00:04:52.132 16:57:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:52.132 16:57:08 -- setup/common.sh@33 -- # echo 0
00:04:52.132 16:57:08 -- setup/common.sh@33 -- # return 0
00:04:52.133 16:57:08 -- setup/hugepages.sh@100 -- # resv=0
00:04:52.133 16:57:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:52.133 nr_hugepages=1024
00:04:52.133 16:57:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:52.133 resv_hugepages=0
00:04:52.133 16:57:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:52.133 surplus_hugepages=0
00:04:52.133 16:57:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:52.134 anon_hugepages=0
00:04:52.134 16:57:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.134 16:57:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:52.134 16:57:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace elided: same get_meminfo locals and mapfile setup as above, node unset ...]
00:04:52.134 16:57:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43253444 kB' 'MemAvailable: 46768008 kB' 'Buffers: 2704 kB' 'Cached: 12774076 kB' 'SwapCached: 0 kB' 'Active: 9781204 kB' 'Inactive: 3508168 kB' 'Active(anon): 9385624 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515852 kB' 'Mapped: 212936 kB' 'Shmem: 8873032 kB' 'KReclaimable: 209308 kB' 'Slab: 599516 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390208 kB' 'KernelStack: 13024 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10567192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[... xtrace elided: per-key scan until HugePages_Total matched ...]
00:04:52.135 16:57:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:52.135 16:57:08 -- setup/common.sh@33 -- # echo 1024
00:04:52.135 16:57:08 -- setup/common.sh@33 -- # return 0
00:04:52.135 16:57:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.135 16:57:08 -- setup/hugepages.sh@112 -- # get_nodes
00:04:52.135 16:57:08 -- setup/hugepages.sh@27 -- # local node
00:04:52.135 16:57:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.135 16:57:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:52.135 16:57:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.135 16:57:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:52.135 16:57:08 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:52.135 16:57:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:52.135 16:57:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.135 16:57:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.135 16:57:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:52.135 16:57:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.135 16:57:08 -- setup/common.sh@18 -- # local node=0
00:04:52.135 16:57:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:52.135 16:57:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:52.135 16:57:08 -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.135 16:57:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.135 16:57:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20365640 kB' 'MemUsed: 12511300 kB' 'SwapCached: 0 kB' 'Active: 5917356 kB' 'Inactive: 3324284 kB' 'Active(anon): 5658344 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976144 kB' 'Mapped: 125672 kB' 'AnonPages: 268708 kB' 'Shmem: 5392848 kB' 'KernelStack: 6056 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111792 kB' 'Slab: 327492 kB' 'SReclaimable: 111792 kB' 'SUnreclaim: 215700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: per-key scan of node0 meminfo until HugePages_Surp matched ...]
00:04:52.136 16:57:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.136 16:57:08 -- setup/common.sh@33 -- # echo 0
00:04:52.136 16:57:08 -- setup/common.sh@33 -- # return 0
00:04:52.136 16:57:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:52.136 16:57:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:52.136 16:57:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:52.136 16:57:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:52.136 16:57:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:52.136 node0=1024 expecting 1024
00:04:52.136 16:57:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:52.136
00:04:52.136 real	0m2.476s
00:04:52.136 user	0m0.617s
00:04:52.136 sys	0m0.867s
00:04:52.136 16:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:52.136 16:57:08 -- common/autotest_common.sh@10 -- # set +x
00:04:52.136 ************************************
00:04:52.136 END TEST default_setup
00:04:52.136 ************************************
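Every get_meminfo call traced in this test follows the same pattern: read /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when a node argument is given, strip the "Node N " prefix that the per-node file adds, then walk the "Key: value" pairs with IFS=': ' until the requested key matches and echo its value. The heavily escaped patterns such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are just how 'set -x' prints the right-hand side of [[ $var == $get ]] to show it is matched literally. The verify step above then checks HugePages_Total against nr_hugepages + surp + resv (here 1024 == 1024 + 0 + 0). Below is a minimal sketch of that helper, reconstructed from the trace of test/setup/common.sh; the shipped function may differ in detail.

  #!/usr/bin/env bash
  shopt -s extglob    # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=$2
      local var val _
      local mem_f=/proc/meminfo mem
      # A node argument switches to that NUMA node's view of the same counters.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      # One loop iteration per key: this is the long run of [[ ... ]] / continue
      # entries elided from the trace above.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Used as in the log, get_meminfo HugePages_Total would print 1024 from /proc/meminfo, and get_meminfo HugePages_Surp 0 would print 0 from node0's meminfo.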
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.136 16:57:08 -- setup/common.sh@32 -- # continue 00:04:52.136 16:57:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.136 16:57:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.136 16:57:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.136 16:57:08 -- setup/common.sh@33 -- # echo 0 00:04:52.136 16:57:08 -- setup/common.sh@33 -- # return 0 00:04:52.136 16:57:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.136 16:57:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.136 16:57:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.136 16:57:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.136 16:57:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.136 node0=1024 expecting 1024 00:04:52.136 16:57:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.136 00:04:52.136 real 0m2.476s 00:04:52.136 user 0m0.617s 00:04:52.136 sys 0m0.867s 00:04:52.136 16:57:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.136 16:57:08 -- common/autotest_common.sh@10 -- # set +x 00:04:52.136 ************************************ 00:04:52.136 END TEST default_setup 00:04:52.136 ************************************ 00:04:52.136 16:57:08 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:52.136 16:57:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.136 16:57:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.136 16:57:08 -- common/autotest_common.sh@10 -- # set +x 00:04:52.136 ************************************ 00:04:52.136 START TEST per_node_1G_alloc 00:04:52.136 ************************************ 00:04:52.136 16:57:08 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:52.136 16:57:08 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:52.136 16:57:08 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:52.136 16:57:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:52.136 16:57:08 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:52.136 16:57:08 -- setup/hugepages.sh@51 -- # shift 00:04:52.136 16:57:08 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:52.136 16:57:08 -- setup/hugepages.sh@52 -- # local node_ids 00:04:52.136 16:57:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.136 16:57:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:52.136 16:57:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:52.136 16:57:08 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:52.136 16:57:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.136 16:57:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:52.136 16:57:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.136 16:57:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.136 16:57:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.136 16:57:08 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:52.136 16:57:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:52.136 16:57:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:52.136 16:57:08 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:52.136 16:57:08 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:52.136 16:57:08 -- setup/hugepages.sh@73 -- # return 0 00:04:52.136 16:57:08 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:52.136 
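The get_test_nr_hugepages trace above reduces to simple arithmetic: a 1048576 kB (1 GiB) request divided by the 2048 kB default hugepage size yields 512 pages, booked against each of the two requested nodes. A minimal sketch of that bookkeeping, assuming the Hugepagesize reported in this run's meminfo snapshots (variable names mirror the trace; this is an illustration, not the verbatim setup/hugepages.sh source):

  size_kb=1048576                          # 1 GiB requested per node
  hugepage_kb=2048                         # Hugepagesize from /proc/meminfo
  nr_hugepages=$((size_kb / hugepage_kb))  # 1048576 / 2048 = 512
  nodes_test=()                            # per-node expected page counts
  for node in 0 1; do                      # user_nodes=('0' '1') in the trace
    nodes_test[node]=$nr_hugepages         # expect 512 pages on each node
  done

With that in hand the test exports NRHUGE=512 and HUGENODE=0,1 and re-runs setup.sh, as the next lines show.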
00:04:52.136 16:57:08 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:52.136 16:57:08 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:52.136 16:57:08 -- setup/hugepages.sh@146 -- # setup output
00:04:52.136 16:57:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:52.136 16:57:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:53.512 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:53.512 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:53.512 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:53.512 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:53.512 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:53.512 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:53.512 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:53.512 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:53.512 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:53.512 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:53.512 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:53.512 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:53.512 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:53.512 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:53.512 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:53.512 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:53.512 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:53.512 16:57:09 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:53.512 16:57:09 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:53.512 16:57:09 -- setup/hugepages.sh@89 -- # local node
00:04:53.512 16:57:09 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:53.512 16:57:09 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:53.512 16:57:09 -- setup/hugepages.sh@92 -- # local surp
00:04:53.512 16:57:09 -- setup/hugepages.sh@93 -- # local resv
00:04:53.512 16:57:09 -- setup/hugepages.sh@94 -- # local anon
00:04:53.512 16:57:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:53.512 16:57:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:53.512 16:57:09 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:53.512 16:57:09 -- setup/common.sh@18 -- # local node=
00:04:53.512 16:57:09 -- setup/common.sh@19 -- # local var val
00:04:53.512 16:57:09 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.512 16:57:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.512 16:57:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.512 16:57:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.512 16:57:09 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.512 16:57:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.512 16:57:09 -- setup/common.sh@31 -- # IFS=': '
00:04:53.512 16:57:09 -- setup/common.sh@31 -- # read -r var val _
00:04:53.512 16:57:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43237388 kB' 'MemAvailable: 46751952 kB' 'Buffers: 2704 kB' 'Cached: 12774136 kB' 'SwapCached: 0 kB' 'Active: 9774896 kB' 'Inactive: 3508168 kB' 'Active(anon): 9379316 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509844 kB' 'Mapped: 212536 kB' 'Shmem: 8873092 kB' 'KReclaimable: 209308 kB' 'Slab: 599620 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390312 kB' 'KernelStack: 12912 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10559540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197272 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:04:53.512 16:57:09 -- setup/common.sh@32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue  [scan repeats for every /proc/meminfo field, MemTotal through HardwareCorrupted]
00:04:53.513 16:57:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:53.513 16:57:09 -- setup/common.sh@33 -- # echo 0
00:04:53.513 16:57:09 -- setup/common.sh@33 -- # return 0
00:04:53.513 16:57:09 -- setup/hugepages.sh@97 -- # anon=0
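Every get_meminfo call in this trace follows the same pattern: pick the global /proc/meminfo (or the per-node sysfs copy when a node id is given), strip any "Node N" prefix, then scan field by field until the requested key matches, with each miss showing up as one of the "continue" lines above. A condensed re-implementation of that pattern, offered as a sketch (the real setup/common.sh uses mapfile plus extglob stripping, so details differ):

  get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # per-node counters live in sysfs when a node id is supplied
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # the repeated 'continue' in the log
      echo "$val"
      return 0
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
  }
  # usage: get_meminfo AnonHugePages      -> 0 on this box
  #        get_meminfo HugePages_Total 0  -> node0's hugepage total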
00:04:53.513 16:57:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:53.513 16:57:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.513 16:57:09 -- setup/common.sh@18 -- # local node=
00:04:53.513 16:57:09 -- setup/common.sh@19 -- # local var val
00:04:53.513 16:57:09 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.513 16:57:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.513 16:57:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.513 16:57:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.513 16:57:09 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.513 16:57:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.513 16:57:09 -- setup/common.sh@31 -- # IFS=': '
00:04:53.513 16:57:09 -- setup/common.sh@31 -- # read -r var val _
00:04:53.513 16:57:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43244496 kB' 'MemAvailable: 46759060 kB' 'Buffers: 2704 kB' 'Cached: 12774136 kB' 'SwapCached: 0 kB' 'Active: 9775748 kB' 'Inactive: 3508168 kB' 'Active(anon): 9380168 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510268 kB' 'Mapped: 212536 kB' 'Shmem: 8873092 kB' 'KReclaimable: 209308 kB' 'Slab: 599620 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390312 kB' 'KernelStack: 12976 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10559552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197288 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:04:53.514 16:57:09 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [scan repeats for every /proc/meminfo field, MemTotal through HugePages_Rsvd]
00:04:53.515 16:57:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.515 16:57:09 -- setup/common.sh@33 -- # echo 0
00:04:53.515 16:57:09 -- setup/common.sh@33 -- # return 0
00:04:53.515 16:57:09 -- setup/hugepages.sh@99 -- # surp=0
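surp above and resv fetched next are the two counters that can make HugePages_Total disagree with what was requested: surplus pages are overcommit allocations beyond nr_hugepages, and reserved pages are promised to mappings but not yet faulted in. On a quiet system both should be zero, which is what every meminfo snapshot in this run shows:

  grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
  # HugePages_Total:    1024
  # HugePages_Free:     1024
  # HugePages_Rsvd:        0
  # HugePages_Surp:        0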
00:04:53.515 16:57:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:53.515 16:57:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:53.515 16:57:09 -- setup/common.sh@18 -- # local node=
00:04:53.515 16:57:09 -- setup/common.sh@19 -- # local var val
00:04:53.515 16:57:09 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.515 16:57:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.515 16:57:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.515 16:57:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.515 16:57:09 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.515 16:57:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.515 16:57:09 -- setup/common.sh@31 -- # IFS=': '
00:04:53.515 16:57:09 -- setup/common.sh@31 -- # read -r var val _
00:04:53.515 16:57:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43244524 kB' 'MemAvailable: 46759088 kB' 'Buffers: 2704 kB' 'Cached: 12774136 kB' 'SwapCached: 0 kB' 'Active: 9775648 kB' 'Inactive: 3508168 kB' 'Active(anon): 9380068 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510164 kB' 'Mapped: 212536 kB' 'Shmem: 8873092 kB' 'KReclaimable: 209308 kB' 'Slab: 599620 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390312 kB' 'KernelStack: 12960 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10559564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197272 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:04:53.515 16:57:09 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue  [scan repeats for every /proc/meminfo field, MemTotal through HugePages_Free]
00:04:53.516 16:57:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.516 16:57:09 -- setup/common.sh@33 -- # echo 0
00:04:53.516 16:57:09 -- setup/common.sh@33 -- # return 0
00:04:53.516 16:57:09 -- setup/hugepages.sh@100 -- # resv=0
00:04:53.516 16:57:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:53.516 nr_hugepages=1024
00:04:53.516 16:57:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:53.516 resv_hugepages=0
00:04:53.516 16:57:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:53.516 surplus_hugepages=0
00:04:53.516 16:57:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:53.516 anon_hugepages=0
00:04:53.516 16:57:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:53.516 16:57:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
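The checks at hugepages.sh@107 and @109 encode verify_nr_hugepages' core invariant: the kernel's total must be fully accounted for by the request plus surplus plus reserved pages, and with both of those at zero it must equal the request exactly. A sketch of the same arithmetic using the values from this run, reading /proc/meminfo directly rather than through the script's helpers:

  nr_hugepages=1024 surp=0 resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  # @107: the total must be fully accounted for
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
  # @109: with no surplus/reserved pages, total == requested
  (( total == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages verified"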
00:04:53.516 16:57:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:53.516 16:57:09 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:53.516 16:57:09 -- setup/common.sh@18 -- # local node=
00:04:53.516 16:57:09 -- setup/common.sh@19 -- # local var val
00:04:53.516 16:57:09 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.516 16:57:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.516 16:57:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.516 16:57:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.516 16:57:09 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.516 16:57:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.516 16:57:09 -- setup/common.sh@31 -- # IFS=': '
00:04:53.516 16:57:09 -- setup/common.sh@31 -- # read -r var val _
00:04:53.516 16:57:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43244296 kB' 'MemAvailable: 46758860 kB' 'Buffers: 2704 kB' 'Cached: 12774148 kB' 'SwapCached: 0 kB' 'Active: 9775072 kB' 'Inactive: 3508168 kB' 'Active(anon): 9379492 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509532 kB' 'Mapped: 212024 kB' 'Shmem: 8873104 kB' 'KReclaimable: 209308 kB' 'Slab: 599640 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390332 kB' 'KernelStack: 12944 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10557904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197256 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:04:53.517 16:57:09 -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue  [scan repeats field by field, MemTotal through ShmemPmdMapped; the captured trace breaks off mid-scan here]
# IFS=': ' 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # continue 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # continue 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # continue 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # continue 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # continue 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.517 16:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.517 16:57:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:53.517 16:57:09 -- setup/common.sh@33 -- # echo 1024 00:04:53.517 16:57:09 -- setup/common.sh@33 -- # return 0 00:04:53.517 16:57:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:53.517 16:57:09 -- setup/hugepages.sh@112 -- # get_nodes 00:04:53.517 16:57:09 -- setup/hugepages.sh@27 -- # local node 00:04:53.517 16:57:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.517 16:57:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.517 16:57:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:53.517 16:57:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:53.517 16:57:09 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:53.518 16:57:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:53.518 16:57:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.518 16:57:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.518 16:57:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.518 16:57:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.518 16:57:09 -- setup/common.sh@18 -- # local node=0 00:04:53.518 16:57:09 -- setup/common.sh@19 -- # local var val 00:04:53.518 16:57:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.518 16:57:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.518 16:57:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.518 16:57:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.518 16:57:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.518 16:57:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.518 16:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.518 16:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.518 16:57:09 -- setup/common.sh@16 -- # printf '%s\n' 
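The trace above is one full pass of the get_meminfo helper: it reads /proc/meminfo (or a per-node copy under sysfs) into an array, strips any "Node <n> " prefix, then walks the keys with IFS=': ' until the requested one matches and its value is echoed. A minimal sketch of that lookup pattern, with hypothetical names rather than the exact setup/common.sh source:

    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        # per-node meminfo lives under sysfs and prefixes every line with "Node <n> "
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

On this box, get_meminfo_sketch HugePages_Total would print 1024, matching the echo in the trace above, and get_meminfo_sketch HugePages_Surp 0 would print 0.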
16:57:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.518 16:57:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.518 16:57:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:53.518 16:57:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.518 16:57:09 -- setup/common.sh@18 -- # local node=0 00:04:53.518 16:57:09 -- setup/common.sh@19 -- # local var val 00:04:53.518 16:57:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.518 16:57:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.518 16:57:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:53.518 16:57:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:53.518 16:57:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.518 16:57:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.518 16:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.518 16:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.518 16:57:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21415000 kB' 'MemUsed: 11461940 kB' 'SwapCached: 0 kB' 'Active: 5918676 kB' 'Inactive: 3324284 kB' 'Active(anon): 5659664 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976260 kB' 'Mapped: 125512 kB' 'AnonPages: 269920 kB' 'Shmem: 5392964 kB' 'KernelStack: 6072 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111792 kB' 'Slab: 327564 kB' 'SReclaimable: 111792 kB' 'SUnreclaim: 215772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.518 16:57:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.518 16:57:09 -- setup/common.sh@32 -- # continue [compare-and-continue trace repeated for every node0 meminfo key through HugePages_Free; none match] 16:57:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.519 16:57:09 -- setup/common.sh@33 -- # echo 0 00:04:53.519 16:57:09 -- setup/common.sh@33 -- # return 0 00:04:53.519 16:57:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.519
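Node 0 reports HugePages_Surp 0 against a node total of 512. The per-node hugepage counters are also exported as single-value sysfs files, so the same numbers can be cross-checked without parsing meminfo at all (standard kernel layout, assuming the 2048 kB default page size seen in the dump above):

    for node in /sys/devices/system/node/node[0-9]*; do
        d=$node/hugepages/hugepages-2048kB
        echo "${node##*/}: total=$(<$d/nr_hugepages) free=$(<$d/free_hugepages) surplus=$(<$d/surplus_hugepages)"
    done

On this host that should print total=512 free=512 surplus=0 for both node0 and node1.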
16:57:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:53.519 16:57:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:53.519 16:57:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:53.519 16:57:09 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:53.519 16:57:09 -- setup/common.sh@18 -- # local node=1 00:04:53.519 16:57:09 -- setup/common.sh@19 -- # local var val 00:04:53.519 16:57:09 -- setup/common.sh@20 -- # local mem_f mem 00:04:53.519 16:57:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:53.519 16:57:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:53.519 16:57:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:53.519 16:57:09 -- setup/common.sh@28 -- # mapfile -t mem 00:04:53.519 16:57:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:53.519 16:57:09 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.519 16:57:09 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.519 16:57:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 21829160 kB' 'MemUsed: 5835620 kB' 'SwapCached: 0 kB' 'Active: 3858108 kB' 'Inactive: 183884 kB' 'Active(anon): 3721540 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 183884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3800612 kB' 'Mapped: 87024 kB' 'AnonPages: 241372 kB' 'Shmem: 3480160 kB' 'KernelStack: 6904 kB' 'PageTables: 4732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97516 kB' 'Slab: 272128 kB' 'SReclaimable: 97516 kB' 'SUnreclaim: 174612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:53.519 16:57:09 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.519 16:57:09 -- setup/common.sh@32 -- # continue [compare-and-continue trace repeated for every node1 meminfo key through HugePages_Free; none match] 16:57:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.520 16:57:09 -- setup/common.sh@33 -- # echo 0 00:04:53.520 16:57:09 -- setup/common.sh@33 -- # return 0 00:04:53.520 16:57:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.520 16:57:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.520 16:57:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.520 16:57:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.520 16:57:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' node0=512 expecting 512 00:04:53.520 16:57:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.520 16:57:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.520 16:57:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.520 16:57:09 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' node1=512 expecting 512 00:04:53.520 16:57:09 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:53.520 00:04:53.520 real 0m1.389s user 0m0.556s sys 0m0.797s 16:57:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.520 16:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:53.520 ************************************ END TEST per_node_1G_alloc ************************************
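per_node_1G_alloc passes in about 1.4 s, and the suite moves on to even_2G_alloc, which (per the setup traced below) requests nr_hugepages=1024 with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, i.e. an even 512/512 split across the two nodes. A hypothetical standalone sketch of requesting such a split by hand through the per-node sysfs knobs (not the setup.sh implementation; needs root and assumes 2048 kB pages):

    NRHUGE=1024
    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( NRHUGE / ${#nodes[@]} ))
    for n in "${nodes[@]}"; do
        # writing nr_hugepages on a node asks the kernel to reserve that many pages there
        echo $per_node | sudo tee $n/hugepages/hugepages-2048kB/nr_hugepages >/dev/null
    done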
00:04:53.777 16:57:09 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:53.777 16:57:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.777 16:57:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.777 16:57:09 -- common/autotest_common.sh@10 -- # set +x 00:04:53.777 ************************************ START TEST even_2G_alloc ************************************ 16:57:09 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:53.777 16:57:09 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:53.777 16:57:09 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:53.777 16:57:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:53.777 16:57:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.777 16:57:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:53.777 16:57:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:53.777 16:57:09 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:53.777 16:57:09 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.777 16:57:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:53.777 16:57:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:53.777 16:57:09 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.777 16:57:09 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.777 16:57:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:53.777 16:57:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:53.777 16:57:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.777 16:57:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:53.777 16:57:09 -- setup/hugepages.sh@83 -- # : 512 00:04:53.777 16:57:09 -- setup/hugepages.sh@84 -- # : 1 00:04:53.777 16:57:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.777 16:57:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:53.777 16:57:09 -- setup/hugepages.sh@83 -- # : 0 00:04:53.777 16:57:09 -- setup/hugepages.sh@84 -- # : 0 00:04:53.777 16:57:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:53.777 16:57:09 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:53.777 16:57:09 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:53.777 16:57:09 -- setup/hugepages.sh@153 -- # setup output 00:04:53.777 16:57:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.777 16:57:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.728 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.728 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:54.728 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.728 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.728 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.728 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.728 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.728 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.728 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.728 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.728 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.728 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.728 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.728 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.728 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.728 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.728
0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.728 16:57:10 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:54.728 16:57:10 -- setup/hugepages.sh@89 -- # local node 00:04:54.728 16:57:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.728 16:57:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.728 16:57:10 -- setup/hugepages.sh@92 -- # local surp 00:04:54.728 16:57:10 -- setup/hugepages.sh@93 -- # local resv 00:04:54.728 16:57:10 -- setup/hugepages.sh@94 -- # local anon 00:04:54.728 16:57:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.728 16:57:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.728 16:57:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.728 16:57:10 -- setup/common.sh@18 -- # local node= 00:04:54.728 16:57:10 -- setup/common.sh@19 -- # local var val 00:04:54.728 16:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.728 16:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.728 16:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.728 16:57:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.728 16:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.728 16:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.728 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.728 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.728 16:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43235316 kB' 'MemAvailable: 46749880 kB' 'Buffers: 2704 kB' 'Cached: 12774228 kB' 'SwapCached: 0 kB' 'Active: 9780616 kB' 'Inactive: 3508168 kB' 'Active(anon): 9385036 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515080 kB' 'Mapped: 212548 kB' 'Shmem: 8873184 kB' 'KReclaimable: 209308 kB' 'Slab: 599564 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390256 kB' 'KernelStack: 12944 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10564376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197308 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:54.728 16:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.728 16:57:10 -- setup/common.sh@32 -- # continue [compare-and-continue trace repeated for every meminfo key ahead of AnonHugePages; none match] 16:57:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.729 16:57:10 -- setup/common.sh@33 -- # echo 0 00:04:54.729 16:57:10 -- setup/common.sh@33 -- # return 0
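AnonHugePages reads back 0 kB, consistent with the hugepages.sh@96 guard above: the THP mode string is "always [madvise] never", so transparent hugepages are available (madvise mode) but no anonymous THP memory is mapped yet. A small sketch of the same check outside the suite, reusing the hypothetical get_meminfo_sketch from earlier (the sysfs path is the standard THP control file):

    # the bracketed word is the active THP mode, e.g. "always [madvise] never"
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp == *'[never]'* ]]; then
        anon=0                                     # THP disabled: nothing to count
    else
        anon=$(get_meminfo_sketch AnonHugePages)   # kB of THP-backed anon memory
    fi
    echo "thp: $thp, AnonHugePages: ${anon:-0} kB"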
00:04:54.729 16:57:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.729 16:57:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.729 16:57:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.729 16:57:10 -- setup/common.sh@18 -- # local node= 00:04:54.729 16:57:10 -- setup/common.sh@19 -- # local var val 00:04:54.729 16:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.729 16:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.729 16:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.729 16:57:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.729 16:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.990 16:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.990 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.990 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.990 16:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43236756 kB' 'MemAvailable: 46751320 kB' 'Buffers: 2704 kB' 'Cached: 12774228 kB' 'SwapCached: 0 kB' 'Active: 9774216 kB' 'Inactive: 3508168 kB' 'Active(anon): 9378636 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508704 kB' 'Mapped: 212184 kB' 'Shmem: 8873184 kB' 'KReclaimable: 209308 kB' 'Slab: 599636 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390328 kB' 'KernelStack: 12960 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10558268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:54.990 16:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.990 16:57:10 -- setup/common.sh@32 -- # continue [compare-and-continue trace repeats for the remaining meminfo keys]
16:57:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 
16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.991 16:57:10 -- setup/common.sh@33 -- # echo 0 00:04:54.991 16:57:10 -- setup/common.sh@33 -- # return 0 00:04:54.991 16:57:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:54.991 16:57:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:54.991 16:57:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:54.991 16:57:10 -- setup/common.sh@18 -- # local node= 00:04:54.991 16:57:10 -- setup/common.sh@19 -- # local var val 00:04:54.991 16:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.991 16:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.991 16:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.991 16:57:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.991 16:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.991 16:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 60541720 kB' 'MemFree: 43237524 kB' 'MemAvailable: 46752088 kB' 'Buffers: 2704 kB' 'Cached: 12774244 kB' 'SwapCached: 0 kB' 'Active: 9774420 kB' 'Inactive: 3508168 kB' 'Active(anon): 9378840 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508876 kB' 'Mapped: 212108 kB' 'Shmem: 8873200 kB' 'KReclaimable: 209308 kB' 'Slab: 599632 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390324 kB' 'KernelStack: 13008 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10558280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.991 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.991 16:57:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # 
continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.992 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.992 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.992 16:57:10 -- setup/common.sh@33 -- # echo 0 00:04:54.992 16:57:10 -- setup/common.sh@33 -- # return 0 00:04:54.992 16:57:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:54.992 16:57:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:54.992 nr_hugepages=1024 00:04:54.992 16:57:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:54.992 resv_hugepages=0 00:04:54.992 16:57:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:54.992 surplus_hugepages=0 00:04:54.993 16:57:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:54.993 anon_hugepages=0 00:04:54.993 16:57:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.993 16:57:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:54.993 16:57:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:54.993 16:57:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:54.993 16:57:10 -- setup/common.sh@18 -- # local node= 00:04:54.993 16:57:10 -- setup/common.sh@19 -- # local var val 00:04:54.993 16:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.993 16:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.993 16:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.993 16:57:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.993 16:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.993 16:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43237272 kB' 'MemAvailable: 46751836 kB' 'Buffers: 2704 kB' 'Cached: 12774252 kB' 'SwapCached: 0 kB' 'Active: 9774208 kB' 'Inactive: 3508168 kB' 'Active(anon): 9378628 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508692 kB' 'Mapped: 212108 kB' 'Shmem: 8873208 kB' 'KReclaimable: 209308 kB' 'Slab: 599632 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390324 kB' 'KernelStack: 13008 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10558296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 
-- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.993 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.993 16:57:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 
16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.994 16:57:10 -- setup/common.sh@33 -- # echo 1024 00:04:54.994 16:57:10 -- setup/common.sh@33 -- # return 0 00:04:54.994 16:57:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.994 16:57:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.994 16:57:10 -- setup/hugepages.sh@27 -- # local node 00:04:54.994 16:57:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.994 16:57:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.994 16:57:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.994 16:57:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.994 16:57:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.994 16:57:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.994 16:57:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.994 16:57:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.994 16:57:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.994 16:57:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.994 16:57:10 -- setup/common.sh@18 -- # local node=0 00:04:54.994 16:57:10 -- setup/common.sh@19 -- # local var val 00:04:54.994 16:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.994 16:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.994 16:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.994 16:57:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.994 16:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.994 16:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21412460 kB' 'MemUsed: 11464480 kB' 'SwapCached: 0 kB' 'Active: 5915696 kB' 'Inactive: 3324284 kB' 'Active(anon): 5656684 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976304 kB' 'Mapped: 125516 kB' 'AnonPages: 266812 kB' 'Shmem: 5393008 kB' 'KernelStack: 6008 kB' 'PageTables: 3764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111792 kB' 'Slab: 327376 kB' 'SReclaimable: 111792 kB' 'SUnreclaim: 215584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 
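
[editor's note] At this point hugepages.sh has everything it needs for the global accounting: anon=0, surp=0 and resv=0 were read above, HugePages_Total came back as 1024, and get_nodes found two NUMA nodes carrying 512 pages each. The consistency test it runs is the arithmetic below, with values copied from this log and variable names following the trace (a sketch, not the verbatim script):

nr_hugepages=1024                 # target pool size echoed by the script
anon=0 surp=0 resv=0              # AnonHugePages, HugePages_Surp, HugePages_Rsvd
total=1024                        # HugePages_Total from /proc/meminfo

# hugepages.sh@107/@110: the pool is consistent only if the kernel total
# equals the requested pages plus surplus and reserved pages.
(( total == nr_hugepages + surp + resv )) || echo "hugepage pool inconsistent"

# get_nodes: with no_nodes=2 the default policy splits the pool evenly.
nodes_sys=(512 512)               # per-node HugePages_Total, node0 and node1

The trace that follows is the node0 verification pass: get_meminfo is invoked again with node=0, so it reads /sys/devices/system/node/node0/meminfo instead of the global file.
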
00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.994 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.994 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # continue 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.995 16:57:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.995 16:57:10 -- setup/common.sh@33 -- # echo 0 00:04:54.995 16:57:10 -- setup/common.sh@33 -- # return 0 00:04:54.995 16:57:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.995 16:57:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.995 16:57:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.995 16:57:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:54.995 16:57:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.995 16:57:10 -- setup/common.sh@18 -- # local node=1 00:04:54.995 16:57:10 -- setup/common.sh@19 -- # local var val 00:04:54.995 16:57:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.995 16:57:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.995 16:57:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:54.995 16:57:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:54.995 16:57:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.995 16:57:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.995 
00:04:54.995 16:57:10 -- setup/common.sh@31 -- # IFS=': '
00:04:54.995 16:57:10 -- setup/common.sh@31 -- # read -r var val _
00:04:54.995 16:57:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 21827652 kB' 'MemUsed: 5837128 kB' 'SwapCached: 0 kB' 'Active: 3858140 kB' 'Inactive: 183884 kB' 'Active(anon): 3721572 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 183884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3800676 kB' 'Mapped: 86592 kB' 'AnonPages: 241484 kB' 'Shmem: 3480224 kB' 'KernelStack: 6984 kB' 'PageTables: 5060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97516 kB' 'Slab: 272256 kB' 'SReclaimable: 97516 kB' 'SUnreclaim: 174740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 loop steps field by field from MemTotal until HugePages_Surp matches]
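Each elided iteration is the same three xtrace entries (the [[ ... ]] comparison, continue, and the IFS/read pair) applied to the next meminfo field; the loop only stops when the requested key comes up, at which point common.sh@33 echoes its value. The scan reduces to this pattern (get_field is an illustrative stand-in for the traced code, not its actual name):

  #!/usr/bin/env bash
  # Sketch of the common.sh@31-33 scan: split each "key: value kB"
  # line on ': ' and stop at the requested field.
  get_field() {
      local get=$1 file=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # the "echo 0" seen at common.sh@33
              return 0
          fi
      done < "$file"
      return 1
  }
  get_field HugePages_Surp    # prints 0 on the nodes traced here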
00:04:55.008 16:57:11 -- setup/common.sh@33 -- # echo 0
00:04:55.008 16:57:11 -- setup/common.sh@33 -- # return 0
00:04:55.008 16:57:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.008 16:57:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.008 16:57:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.008 16:57:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.008 16:57:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:55.008 node0=512 expecting 512
00:04:55.008 16:57:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.008 16:57:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.008 16:57:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.008 16:57:11 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:55.008 node1=512 expecting 512
00:04:55.008 16:57:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:55.008
00:04:55.008 real    0m1.329s
00:04:55.008 user    0m0.565s
00:04:55.008 sys     0m0.725s
00:04:55.008 16:57:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:55.008 16:57:11 -- common/autotest_common.sh@10 -- # set +x
00:04:55.008 ************************************
00:04:55.008 END TEST even_2G_alloc
00:04:55.008 ************************************
00:04:55.008 16:57:11 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:55.008 16:57:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:55.008 16:57:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:55.008 16:57:11 -- common/autotest_common.sh@10 -- # set +x
00:04:55.008 ************************************
00:04:55.008 START TEST odd_alloc
00:04:55.008 ************************************
00:04:55.008 16:57:11 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:55.008 16:57:11 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:55.008 16:57:11 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:55.008 16:57:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:55.008 16:57:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:55.008 16:57:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:55.008 16:57:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:55.008 16:57:11 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:55.008 16:57:11 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:55.008 16:57:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:55.008 16:57:11 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:55.008 16:57:11 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:55.008 16:57:11 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:55.008 16:57:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:55.008 16:57:11 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:55.008 16:57:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:55.008 16:57:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:55.008 16:57:11 -- setup/hugepages.sh@83 -- # : 513
00:04:55.008 16:57:11 -- setup/hugepages.sh@84 -- # : 1
00:04:55.008 16:57:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:55.008 16:57:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:55.008 16:57:11 -- setup/hugepages.sh@83 -- # : 0
00:04:55.008 16:57:11 -- setup/hugepages.sh@84 -- # : 0
00:04:55.008 16:57:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:55.008 16:57:11 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:55.008 16:57:11 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:55.008 16:57:11 -- setup/hugepages.sh@160 -- # setup output
00:04:55.008 16:57:11 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:55.008 16:57:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:56.381 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:56.381 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:56.381 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:56.381 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:56.381 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:56.381 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:56.381 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:56.381 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:56.381 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:56.381 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:56.381 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:56.381 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:56.381 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:56.381 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:56.381 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:56.381 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:56.381 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:56.381 16:57:12 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:56.381 16:57:12 -- setup/hugepages.sh@89 -- # local node
00:04:56.381 16:57:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:56.381 16:57:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:56.381 16:57:12 -- setup/hugepages.sh@92 -- # local surp
00:04:56.381 16:57:12 -- setup/hugepages.sh@93 -- # local resv
00:04:56.381 16:57:12 -- setup/hugepages.sh@94 -- # local anon
00:04:56.381 16:57:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:56.381 16:57:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.381 16:57:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.381 16:57:12 -- setup/common.sh@18 -- # local node=
00:04:56.381 16:57:12 -- setup/common.sh@19 -- # local var val
00:04:56.381 16:57:12 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.381 16:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.381 16:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.381 16:57:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.381 16:57:12 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.381 16:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.381 16:57:12 -- setup/common.sh@31 -- # IFS=': '
00:04:56.381 16:57:12 -- setup/common.sh@31 -- # read -r var val _
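The numbers in the odd_alloc setup fit together: HUGEMEM=2049 MB is 2098176 kB, which at the default 2048 kB page size is 1024.5 pages, rounded up to nr_hugepages=1025; the per-node loop at hugepages.sh@81-84 then gives node1 the floor of the division (512) and leaves the remainder (513) for node0. A sketch of that split under those assumptions (plain variable names stand in for the script's _nr_hugepages/_no_nodes):

  #!/usr/bin/env bash
  # Sketch: divide an odd hugepage count across NUMA nodes the way the
  # trace above does; the higher-numbered node gets floor(n/k), the
  # remainder rolls down toward node0.
  nr_hugepages=1025 no_nodes=2
  declare -a nodes_test
  while (( no_nodes > 0 )); do
      (( nodes_test[no_nodes - 1] = nr_hugepages / no_nodes ))
      (( nr_hugepages -= nodes_test[no_nodes - 1] ))
      (( no_nodes-- ))
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512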
00:04:56.382 16:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43276016 kB' 'MemAvailable: 46790580 kB' 'Buffers: 2704 kB' 'Cached: 12774324 kB' 'SwapCached: 0 kB' 'Active: 9767416 kB' 'Inactive: 3508168 kB' 'Active(anon): 9371836 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501864 kB' 'Mapped: 211316 kB' 'Shmem: 8873280 kB' 'KReclaimable: 209308 kB' 'Slab: 599444 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390136 kB' 'KernelStack: 12848 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10531700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: setup/common.sh@31-32 loop steps from MemTotal through the remaining fields until AnonHugePages matches]
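The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 above is a transparent-hugepage check: the kernel marks the active THP mode with square brackets in /sys/kernel/mm/transparent_hugepage/enabled, and AnonHugePages is only worth sampling when that mode is not [never]. A standalone sketch of the same idea (illustrative, not the script's own code):

  #!/usr/bin/env bash
  # Sketch: detect the active THP mode; the selected mode is the
  # bracketed token, e.g. "always [madvise] never".
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      echo "THP active mode: $(grep -o '\[[a-z]*\]' <<< "$thp")"
  else
      echo "THP disabled"
  fi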
00:04:56.382 16:57:12 -- setup/common.sh@33 -- # echo 0
00:04:56.382 16:57:12 -- setup/common.sh@33 -- # return 0
00:04:56.382 16:57:12 -- setup/hugepages.sh@97 -- # anon=0
00:04:56.382 16:57:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:56.382 16:57:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:56.382 16:57:12 -- setup/common.sh@18 -- # local node=
00:04:56.382 16:57:12 -- setup/common.sh@19 -- # local var val
00:04:56.382 16:57:12 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.382 16:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.382 16:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.382 16:57:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.382 16:57:12 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.382 16:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.382 16:57:12 -- setup/common.sh@31 -- # IFS=': '
00:04:56.382 16:57:12 -- setup/common.sh@31 -- # read -r var val _
00:04:56.382 16:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43275968 kB' 'MemAvailable: 46790532 kB' 'Buffers: 2704 kB' 'Cached: 12774328 kB' 'SwapCached: 0 kB' 'Active: 9767724 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372144 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502204 kB' 'Mapped: 211300 kB' 'Shmem: 8873284 kB' 'KReclaimable: 209308 kB' 'Slab: 599444 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390136 kB' 'KernelStack: 12864 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10531712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197160 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: setup/common.sh@31-32 loop scans every field until HugePages_Surp matches]
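verify_nr_hugepages is collecting three system-wide numbers in turn: anon (AnonHugePages), surp (HugePages_Surp), and resv (HugePages_Rsvd), and the checks at hugepages.sh@107-109 then require the configured count to equal HugePages_Total once surplus and reserved pages are accounted for. A condensed sketch of that accounting, reusing the illustrative get_field helper from earlier (repeated so the snippet stands alone):

  #!/usr/bin/env bash
  # Sketch: the verify step boils down to this accounting check.
  get_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
  }
  expected=1025                        # nr_hugepages requested by odd_alloc
  surp=$(get_field HugePages_Surp)
  resv=$(get_field HugePages_Rsvd)
  total=$(get_field HugePages_Total)
  (( expected == total + surp + resv )) && (( expected == total )) \
      && echo "nr_hugepages=$expected"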
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.383 16:57:12 -- setup/common.sh@33 -- # echo 0 00:04:56.383 16:57:12 -- setup/common.sh@33 -- # return 0 00:04:56.383 16:57:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:56.383 16:57:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.383 16:57:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.383 16:57:12 -- setup/common.sh@18 -- # local node= 00:04:56.383 16:57:12 -- setup/common.sh@19 -- # local var val 00:04:56.383 16:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.383 16:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.383 16:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.383 16:57:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.383 16:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.383 16:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.383 16:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43276216 kB' 'MemAvailable: 46790780 kB' 'Buffers: 2704 kB' 'Cached: 12774340 kB' 'SwapCached: 0 kB' 'Active: 9767636 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372056 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502040 kB' 'Mapped: 211224 kB' 'Shmem: 8873296 kB' 'KReclaimable: 209308 kB' 'Slab: 599416 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390108 kB' 'KernelStack: 12864 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10531728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197160 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.383 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.383 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 
16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.384 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.384 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 
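The same counters these scans keep re-deriving from /proc/meminfo are also exposed per node under sysfs, which is a quicker way to eyeball the 513/512 split that the odd_alloc setup configured. A sketch of that alternative view, assuming 2048 kB hugepages (the loop is an illustration, not part of the test scripts):

  #!/usr/bin/env bash
  # Sketch: read the per-node 2 MB hugepage counts straight from sysfs.
  for n in /sys/devices/system/node/node[0-9]*; do
      f=$n/hugepages/hugepages-2048kB/nr_hugepages
      [[ -r $f ]] && echo "${n##*/}=$(cat "$f")"
  done
  # Expected after this odd_alloc setup: node0=513 node1=512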
00:04:56.384 16:57:12 -- setup/common.sh@33 -- # echo 0
00:04:56.384 16:57:12 -- setup/common.sh@33 -- # return 0
00:04:56.384 16:57:12 -- setup/hugepages.sh@100 -- # resv=0
00:04:56.384 16:57:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:56.384 nr_hugepages=1025
00:04:56.384 16:57:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:56.384 resv_hugepages=0
00:04:56.384 16:57:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:56.384 surplus_hugepages=0
00:04:56.384 16:57:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:56.384 anon_hugepages=0
00:04:56.384 16:57:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:56.384 16:57:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:56.384 16:57:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:56.384 16:57:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:56.384 16:57:12 -- setup/common.sh@18 -- # local node=
00:04:56.384 16:57:12 -- setup/common.sh@19 -- # local var val
00:04:56.384 16:57:12 -- setup/common.sh@20 -- # local mem_f mem
00:04:56.384 16:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:56.384 16:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:56.384 16:57:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:56.384 16:57:12 -- setup/common.sh@28 -- # mapfile -t mem
00:04:56.385 16:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:56.385 16:57:12 -- setup/common.sh@31 -- # IFS=': '
00:04:56.385 16:57:12 -- setup/common.sh@31 -- # read -r var val _
00:04:56.385 16:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43276216 kB' 'MemAvailable: 46790780 kB' 'Buffers: 2704 kB' 'Cached: 12774352 kB' 'SwapCached: 0 kB' 'Active: 9767644 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372064 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502044 kB' 'Mapped: 211224 kB' 'Shmem: 8873308 kB' 'KReclaimable: 209308 kB' 'Slab: 599416 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390108 kB' 'KernelStack: 12864 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10531740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197160 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: setup/common.sh@31-32 loop scans MemTotal..SwapFree toward HugePages_Total]
00:04:56.385 16:57:12 -- setup/common.sh@32 -- # continue
00:04:56.385 16:57:12 -- setup/common.sh@31 -- # IFS=': '
00:04:56.385 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.385 16:57:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.385 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.385 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.385 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.385 16:57:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.385 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.385 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.385 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.385 16:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.385 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.385 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.385 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 
16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.386 16:57:12 -- setup/common.sh@33 -- # echo 1025 00:04:56.386 16:57:12 -- setup/common.sh@33 -- # return 0 00:04:56.386 16:57:12 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:56.386 16:57:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.386 16:57:12 -- setup/hugepages.sh@27 -- # local node 00:04:56.386 16:57:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.386 16:57:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.386 16:57:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.386 16:57:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:56.386 16:57:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.386 16:57:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.386 16:57:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.386 16:57:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.386 16:57:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.386 16:57:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.386 16:57:12 -- setup/common.sh@18 -- # local node=0 00:04:56.386 16:57:12 -- setup/common.sh@19 -- # local var val 00:04:56.386 16:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.386 16:57:12 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:56.386 16:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.386 16:57:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.386 16:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.386 16:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21423872 kB' 'MemUsed: 11453068 kB' 'SwapCached: 0 kB' 'Active: 5912476 kB' 'Inactive: 3324284 kB' 'Active(anon): 5653464 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976316 kB' 'Mapped: 124940 kB' 'AnonPages: 263652 kB' 'Shmem: 5393020 kB' 'KernelStack: 5960 kB' 'PageTables: 3308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111792 kB' 'Slab: 327224 kB' 'SReclaimable: 111792 kB' 'SUnreclaim: 215432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.386 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.386 16:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # 
continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@33 -- # echo 0 00:04:56.387 16:57:12 -- setup/common.sh@33 -- # return 0 00:04:56.387 16:57:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.387 16:57:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.387 16:57:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.387 16:57:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.387 16:57:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.387 16:57:12 -- setup/common.sh@18 -- # local node=1 00:04:56.387 16:57:12 -- setup/common.sh@19 -- # local var val 00:04:56.387 16:57:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:56.387 16:57:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.387 16:57:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.387 16:57:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.387 16:57:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.387 16:57:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 21852780 kB' 'MemUsed: 5812000 kB' 'SwapCached: 0 kB' 'Active: 3855544 kB' 'Inactive: 183884 kB' 'Active(anon): 3718976 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 183884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3800768 kB' 'Mapped: 86284 kB' 'AnonPages: 238700 kB' 'Shmem: 3480316 kB' 'KernelStack: 6920 kB' 'PageTables: 4904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97516 kB' 'Slab: 272192 kB' 'SReclaimable: 97516 kB' 'SUnreclaim: 174676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # 
continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.387 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.387 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.388 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.388 16:57:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.388 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.388 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.388 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.388 16:57:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.388 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.388 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.388 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.388 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.388 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.388 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.388 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.388 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.388 16:57:12 -- setup/common.sh@32 -- # continue 00:04:56.388 16:57:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:56.388 16:57:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:56.388 16:57:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.388 16:57:12 -- setup/common.sh@33 -- # echo 0 00:04:56.388 16:57:12 -- setup/common.sh@33 -- # return 0 00:04:56.388 16:57:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.388 16:57:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.388 16:57:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.388 16:57:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.388 16:57:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:56.388 node0=512 expecting 513 00:04:56.388 16:57:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.388 16:57:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.388 16:57:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.388 16:57:12 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:56.388 node1=513 expecting 512 00:04:56.388 
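The xtrace above shows setup/common.sh's get_meminfo helper at work: it walks meminfo key/value pairs with IFS=': ' and read -r var val _, skipping every non-matching key (the long runs of "continue") until the requested key is found, then echoes its value; per-node queries read /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo. A minimal sketch of that lookup idiom follows; the helper name get_mem and the sed-based prefix strip are illustrative assumptions (the traced script uses mapfile plus a "Node +([0-9]) " pattern strip instead):

    get_mem() {   # get_mem <Key> [<node>] -- illustrative, not the traced helper
        local key=$1 node=${2-} file=/proc/meminfo
        [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Per-node files prefix each line with "Node <n> "; strip that so both
        # file formats parse identically below (a no-op for /proc/meminfo).
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$file")
        return 1
    }
    # e.g. get_mem HugePages_Total 0  ->  512 for node0 in the run above

This is how the 1025 pages of the odd_alloc test resolve to the 512 + 513 per-node split echoed above (node0 HugePages_Total: 512, node1 HugePages_Total: 513).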
16:57:12 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:56.388 00:04:56.388 real 0m1.497s 00:04:56.388 user 0m0.613s 00:04:56.388 sys 0m0.852s 00:04:56.388 16:57:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.388 16:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:56.388 ************************************ 00:04:56.388 END TEST odd_alloc 00:04:56.388 ************************************ 00:04:56.645 16:57:12 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:56.645 16:57:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.645 16:57:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.645 16:57:12 -- common/autotest_common.sh@10 -- # set +x 00:04:56.645 ************************************ 00:04:56.645 START TEST custom_alloc 00:04:56.645 ************************************ 00:04:56.645 16:57:12 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:56.645 16:57:12 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:56.645 16:57:12 -- setup/hugepages.sh@169 -- # local node 00:04:56.645 16:57:12 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:56.645 16:57:12 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:56.645 16:57:12 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:56.645 16:57:12 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:56.645 16:57:12 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:56.645 16:57:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.645 16:57:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.645 16:57:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:56.645 16:57:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.645 16:57:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.645 16:57:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.646 16:57:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:56.646 16:57:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.646 16:57:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.646 16:57:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.646 16:57:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:56.646 16:57:12 -- setup/hugepages.sh@83 -- # : 256 00:04:56.646 16:57:12 -- setup/hugepages.sh@84 -- # : 1 00:04:56.646 16:57:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:56.646 16:57:12 -- setup/hugepages.sh@83 -- # : 0 00:04:56.646 16:57:12 -- setup/hugepages.sh@84 -- # : 0 00:04:56.646 16:57:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:56.646 16:57:12 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:56.646 16:57:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.646 16:57:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.646 16:57:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.646 16:57:12 -- setup/hugepages.sh@62 -- 
# user_nodes=() 00:04:56.646 16:57:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.646 16:57:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.646 16:57:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.646 16:57:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.646 16:57:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.646 16:57:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.646 16:57:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:56.646 16:57:12 -- setup/hugepages.sh@78 -- # return 0 00:04:56.646 16:57:12 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:56.646 16:57:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:56.646 16:57:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:56.646 16:57:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:56.646 16:57:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:56.646 16:57:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:56.646 16:57:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.646 16:57:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.646 16:57:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.646 16:57:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.646 16:57:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.646 16:57:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.646 16:57:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:56.646 16:57:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.646 16:57:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:56.646 16:57:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.646 16:57:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:56.646 16:57:12 -- setup/hugepages.sh@78 -- # return 0 00:04:56.646 16:57:12 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:56.646 16:57:12 -- setup/hugepages.sh@187 -- # setup output 00:04:56.646 16:57:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.646 16:57:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.578 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:57.578 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:57.578 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:57.578 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:57.578 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:57.578 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:57.578 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:57.578 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:57.578 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:57.578 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:57.578 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:57.578 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:04:57.578 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:57.578 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:57.578 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:57.578 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:57.578 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:57.840 16:57:13 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:57.840 16:57:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:57.840 16:57:13 -- setup/hugepages.sh@89 -- # local node 00:04:57.840 16:57:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.840 16:57:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.840 16:57:13 -- setup/hugepages.sh@92 -- # local surp 00:04:57.840 16:57:13 -- setup/hugepages.sh@93 -- # local resv 00:04:57.840 16:57:13 -- setup/hugepages.sh@94 -- # local anon 00:04:57.840 16:57:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.840 16:57:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.840 16:57:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.840 16:57:13 -- setup/common.sh@18 -- # local node= 00:04:57.840 16:57:13 -- setup/common.sh@19 -- # local var val 00:04:57.840 16:57:13 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.840 16:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.840 16:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.840 16:57:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.840 16:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.840 16:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.840 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.840 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.840 16:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42224708 kB' 'MemAvailable: 45739272 kB' 'Buffers: 2704 kB' 'Cached: 12774420 kB' 'SwapCached: 0 kB' 'Active: 9768676 kB' 'Inactive: 3508168 kB' 'Active(anon): 9373096 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502992 kB' 'Mapped: 211320 kB' 'Shmem: 8873376 kB' 'KReclaimable: 209308 kB' 'Slab: 599236 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 389928 kB' 'KernelStack: 12896 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10534088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197256 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:57.840 16:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.840 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.840 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.840 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.840 16:57:13 -- setup/common.sh@32 -- # [[ MemFree == 
00:04:57.840 16:57:13 -- setup/common.sh@31/@32 -- # … per-key scan of /proc/meminfo continues (MemAvailable, Buffers, Cached, … through HardwareCorrupted) against \A\n\o\n\H\u\g\e\P\a\g\e\s; no key matches, each iteration runs IFS=': '; read -r var val _; continue …
00:04:57.841 16:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:57.841 16:57:13 -- setup/common.sh@33 -- # echo 0
00:04:57.841 16:57:13 -- setup/common.sh@33 -- # return 0
00:04:57.841 16:57:13 -- setup/hugepages.sh@97 -- # anon=0
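The loop traced above repeats for every get_meminfo call in this log, so the shape of the helper is worth spelling out once. The following is a minimal bash sketch reconstructed only from the xtrace; the real helper lives in test/setup/common.sh, and any detail not visible in the trace (exact control flow, fallback ordering, error handling) is an assumption.

  #!/usr/bin/env bash
  # Sketch of get_meminfo as reconstructed from the trace; not the
  # authoritative implementation from test/setup/common.sh.
  shopt -s extglob

  get_meminfo() {
      local get=$1 node=$2
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # The trace shows node queries switching to the per-node sysfs view
      # (e.g. /sys/devices/system/node/node0/meminfo) when it exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node files prefix each line with "Node N "; strip the prefix so
      # both sources parse identically (extglob pattern, as in the trace).
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the long per-key scans in this log
          echo "$val"                        # e.g. 0, or 1536 for HugePages_Total
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Total      # global query, as at hugepages.sh@110
  get_meminfo HugePages_Surp 0     # per-node query, as at hugepages.sh@117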
00:04:57.841 16:57:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:57.841 16:57:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.841 16:57:13 -- setup/common.sh@18 -- # local node=
00:04:57.841 16:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.841 16:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.841 16:57:13 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.841 16:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42225340 kB' 'MemAvailable: 45739904 kB' 'Buffers: 2704 kB' 'Cached: 12774420 kB' 'SwapCached: 0 kB' 'Active: 9769060 kB' 'Inactive: 3508168 kB' 'Active(anon): 9373480 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503376 kB' 'Mapped: 211320 kB' 'Shmem: 8873376 kB' 'KReclaimable: 209308 kB' 'Slab: 599220 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 389912 kB' 'KernelStack: 12848 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10531808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197176 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:04:57.841 16:57:13 -- setup/common.sh@31/@32 -- # … per-key scan (MemTotal through HugePages_Rsvd) against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; no key matches until HugePages_Surp …
00:04:57.841 16:57:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.841 16:57:13 -- setup/common.sh@33 -- # echo 0
00:04:57.841 16:57:13 -- setup/common.sh@33 -- # return 0
00:04:57.841 16:57:13 -- setup/hugepages.sh@99 -- # surp=0
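Incidentally, the runs of backslashes in these comparisons (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and the like) are not corruption: when bash's xtrace prints a [[ ... == ... ]] whose right-hand side is quoted, it appears to escape each character to mark the operand as a literal pattern rather than a glob. A minimal reproduction, with the variable name chosen to match this log:

  #!/usr/bin/env bash
  set -x
  get=HugePages_Surp
  # traced as: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  [[ HugePages_Surp == "$get" ]]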
00:04:57.841 16:57:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:57.841 16:57:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:57.841 16:57:13 -- setup/common.sh@18 -- # local node=
00:04:57.841 16:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.841 16:57:13 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.842 16:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42225528 kB' 'MemAvailable: 45740092 kB' 'Buffers: 2704 kB' 'Cached: 12774420 kB' 'SwapCached: 0 kB' 'Active: 9768980 kB' 'Inactive: 3508168 kB' 'Active(anon): 9373400 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503304 kB' 'Mapped: 211304 kB' 'Shmem: 8873376 kB' 'KReclaimable: 209308 kB' 'Slab: 599220 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 389912 kB' 'KernelStack: 12928 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10531824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197176 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:04:57.842 16:57:13 -- setup/common.sh@31/@32 -- # … per-key scan (MemTotal through HugePages_Free) against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; no key matches until HugePages_Rsvd …
00:04:57.842 16:57:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:57.842 16:57:13 -- setup/common.sh@33 -- # echo 0
00:04:57.842 16:57:13 -- setup/common.sh@33 -- # return 0
00:04:57.842 16:57:13 -- setup/hugepages.sh@100 -- # resv=0
00:04:57.842 16:57:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:57.842 16:57:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:57.842 16:57:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:57.842 16:57:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:57.842 16:57:13 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:57.842 16:57:13 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
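Those four echoed totals plus the two arithmetic guards are the decision point of this phase: the configured hugepage count must be consistent with the surplus and reserved pages just read back. A standalone sketch of that bookkeeping with this run's values hard-coded; the variable names follow the trace, but the exact expressions in setup/hugepages.sh are assumptions:

  #!/usr/bin/env bash
  # Hugepage accounting as visible at setup/hugepages.sh@102-@109 (sketch).
  nr_hugepages=1536   # requested count
  anon=0              # AnonHugePages, via get_meminfo
  surp=0              # HugePages_Surp, via get_meminfo
  resv=0              # HugePages_Rsvd, via get_meminfo
  total=1536          # HugePages_Total, via get_meminfo

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # Both guards held in this run (1536 == 1536 + 0 + 0).
  if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
      echo "hugepage accounting consistent"
  else
      echo "hugepage accounting mismatch" >&2
      exit 1
  fi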
00:04:57.842 16:57:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:57.842 16:57:13 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:57.842 16:57:13 -- setup/common.sh@18 -- # local node=
00:04:57.842 16:57:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.842 16:57:13 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.842 16:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42225912 kB' 'MemAvailable: 45740476 kB' 'Buffers: 2704 kB' 'Cached: 12774448 kB' 'SwapCached: 0 kB' 'Active: 9767868 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372288 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502128 kB' 'Mapped: 211232 kB' 'Shmem: 8873404 kB' 'KReclaimable: 209308 kB' 'Slab: 599196 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 389888 kB' 'KernelStack: 12864 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10531836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197176 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:04:57.843 16:57:13 -- setup/common.sh@31/@32 -- # … per-key scan (MemTotal through Unaccepted) against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; no key matches until HugePages_Total …
00:04:57.843 16:57:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:57.843 16:57:13 -- setup/common.sh@33 -- # echo 1536
00:04:57.843 16:57:13 -- setup/common.sh@33 -- # return 0
00:04:57.843 16:57:13 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:57.843 16:57:13 -- setup/hugepages.sh@112 -- # get_nodes
00:04:57.843 16:57:13 -- setup/hugepages.sh@27 -- # local node
00:04:57.843 16:57:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.843 16:57:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:57.843 16:57:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.843 16:57:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:57.843 16:57:13 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:57.843 16:57:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
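get_nodes walks sysfs once and records each node's 2048 kB hugepage count: 512 on node0 and 1024 on node1 in this run, which sums to the 1536 total checked above. The trace only shows the already-resolved assignments, so reading the counts from the standard per-node sysfs file, as below, is an assumption:

  #!/usr/bin/env bash
  # Sketch of get_nodes (hugepages.sh@112) and the per-node surplus pass
  # (@115-@117). The sysfs path for the 2048kB pool is standard Linux.
  shopt -s extglob nullglob

  declare -a nodes_sys nodes_test
  for node in /sys/devices/system/node/node+([0-9]); do
      # this run: nodes_sys[0]=512, nodes_sys[1]=1024
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || { echo "no NUMA nodes visible" >&2; exit 1; }

  # Per-node expectation: fold in reserved pages (0 in this run), then each
  # node's HugePages_Surp is fetched and added the same way.
  nodes_test=("${nodes_sys[@]}")
  resv=0
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
  done
  echo "no_nodes=$no_nodes nodes_sys=(${nodes_sys[*]})"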
00:04:57.843 16:57:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:57.843 16:57:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:57.843 16:57:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:57.843 16:57:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.843 16:57:13 -- setup/common.sh@18 -- # local node=0
00:04:57.843 16:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:57.843 16:57:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:57.843 16:57:13 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.843 16:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21421896 kB' 'MemUsed: 11455044 kB' 'SwapCached: 0 kB' 'Active: 5913224 kB' 'Inactive: 3324284 kB' 'Active(anon): 5654212 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976372 kB' 'Mapped: 124944 kB' 'AnonPages: 264336 kB' 'Shmem: 5393076 kB' 'KernelStack: 5992 kB' 'PageTables: 3368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111792 kB' 'Slab: 327028 kB' 'SReclaimable: 111792 kB' 'SUnreclaim: 215236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:57.843 16:57:13 -- setup/common.sh@31/@32 -- # … per-key scan of node0 meminfo (MemTotal through HugePages_Free) against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; no key matches until HugePages_Surp …
00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.844 16:57:13 -- setup/common.sh@33 -- # echo 0
00:04:57.844 16:57:13 -- setup/common.sh@33 -- # return 0
00:04:57.844 16:57:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:57.844 16:57:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:57.844 16:57:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:57.844 16:57:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:57.844 16:57:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.844 16:57:13 -- setup/common.sh@18 -- # local node=1
00:04:57.844 16:57:13 -- setup/common.sh@19 -- # local var val
00:04:57.844 16:57:13 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.844 16:57:13 --
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.844 16:57:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:57.844 16:57:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:57.844 16:57:13 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.844 16:57:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 20803764 kB' 'MemUsed: 6861016 kB' 'SwapCached: 0 kB' 'Active: 3854652 kB' 'Inactive: 183884 kB' 'Active(anon): 3718084 kB' 'Inactive(anon): 0 kB' 'Active(file): 136568 kB' 'Inactive(file): 183884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3800796 kB' 'Mapped: 86288 kB' 'AnonPages: 237788 kB' 'Shmem: 3480344 kB' 'KernelStack: 6872 kB' 'PageTables: 4792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97516 kB' 'Slab: 272168 kB' 'SReclaimable: 97516 kB' 'SUnreclaim: 174652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 
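[editor's note] For orientation: the two per-node dumps (node0 above, node1 here) carry the split that custom_alloc set up, 512 huge pages on node 0 and 1024 on node 1, 1536 in total. A quick stand-alone check of that split can read the standard per-node sysfs counters instead of replaying the meminfo scan; the 2048 kB page size matches the 'Hugepagesize: 2048 kB' in the global dumps later in this run, everything else in the sketch is illustrative.

    total=0
    for d in /sys/devices/system/node/node[0-9]*; do
        read -r pages < "$d/hugepages/hugepages-2048kB/nr_hugepages"
        echo "${d##*/}: $pages huge pages"
        (( total += pages ))
    done
    echo "total: $total"    # 512 + 1024 = 1536 for this run
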
00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- 
setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 
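[editor's note] The key the loop is about to land on, HugePages_Surp, counts surplus pages the kernel allocated beyond the persistent pool; both nodes report 0 here, which is why the `(( nodes_test[node] += 0 ))` accounting goes through unchanged. The same numbers are available directly from the per-node sysfs counter; a sketch, assuming the standard kernel paths and 2 MB pages as above.

    for d in /sys/devices/system/node/node[0-9]*; do
        read -r surp < "$d/hugepages/hugepages-2048kB/surplus_hugepages"
        echo "${d##*/}: surplus=$surp"    # 0 on both nodes in this run
    done
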
00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # continue 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.844 16:57:13 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.844 16:57:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.844 16:57:13 -- setup/common.sh@33 -- # echo 0 00:04:57.844 16:57:13 -- setup/common.sh@33 -- # return 0 00:04:57.844 16:57:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.844 16:57:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.844 16:57:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.844 16:57:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.844 16:57:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:57.844 node0=512 expecting 512 00:04:58.102 16:57:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.102 16:57:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.102 16:57:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.102 16:57:13 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:58.102 node1=1024 expecting 1024 00:04:58.102 16:57:13 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:58.102 00:04:58.102 real 0m1.444s 00:04:58.102 user 0m0.580s 00:04:58.102 sys 0m0.830s 00:04:58.102 16:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.102 16:57:13 -- common/autotest_common.sh@10 -- # set +x 00:04:58.102 ************************************ 00:04:58.102 END TEST custom_alloc 00:04:58.102 ************************************ 00:04:58.102 16:57:14 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:58.102 16:57:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.102 16:57:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.102 16:57:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.102 ************************************ 00:04:58.102 START TEST no_shrink_alloc 00:04:58.102 ************************************ 00:04:58.102 16:57:14 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:58.102 16:57:14 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:58.102 16:57:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:58.102 16:57:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:58.102 16:57:14 -- setup/hugepages.sh@51 -- # shift 00:04:58.102 16:57:14 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:58.102 16:57:14 -- setup/hugepages.sh@52 -- # local node_ids 00:04:58.102 16:57:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:58.102 16:57:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:58.102 16:57:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:58.102 16:57:14 -- 
setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:58.102 16:57:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:58.102 16:57:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:58.102 16:57:14 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:58.102 16:57:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:58.102 16:57:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:58.102 16:57:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:58.102 16:57:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:58.102 16:57:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:58.102 16:57:14 -- setup/hugepages.sh@73 -- # return 0 00:04:58.102 16:57:14 -- setup/hugepages.sh@198 -- # setup output 00:04:58.102 16:57:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.102 16:57:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.054 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.054 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.054 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.054 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.054 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.054 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.054 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.054 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.054 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.054 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.054 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.054 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.054 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.054 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.054 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.054 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.054 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.314 16:57:15 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:59.314 16:57:15 -- setup/hugepages.sh@89 -- # local node 00:04:59.314 16:57:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.314 16:57:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.314 16:57:15 -- setup/hugepages.sh@92 -- # local surp 00:04:59.314 16:57:15 -- setup/hugepages.sh@93 -- # local resv 00:04:59.314 16:57:15 -- setup/hugepages.sh@94 -- # local anon 00:04:59.314 16:57:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.314 16:57:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.314 16:57:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.314 16:57:15 -- setup/common.sh@18 -- # local node= 00:04:59.314 16:57:15 -- setup/common.sh@19 -- # local var val 00:04:59.314 16:57:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.314 16:57:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.314 16:57:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.314 16:57:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.314 16:57:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.314 16:57:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43272396 kB' 'MemAvailable: 46786960 kB' 'Buffers: 2704 kB' 'Cached: 12774504 kB' 'SwapCached: 0 kB' 'Active: 9768244 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372664 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502392 kB' 'Mapped: 211244 kB' 'Shmem: 8873460 kB' 'KReclaimable: 209308 kB' 'Slab: 599320 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390012 kB' 'KernelStack: 12864 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10532008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 
16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.314 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.314 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
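[editor's note] This pass is no_shrink_alloc's anon probe: because transparent hugepages read 'always [madvise] never' (i.e. not hard-disabled) at the hugepages.sh@96 check above, verify_nr_hugepages pulls AnonHugePages out of the global dump and records it as anon, which comes out 0 in this run. A condensed sketch of that branch, using the standard THP and meminfo paths:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this trace
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    fi
    echo "anon=$anon"    # anon=0 here
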
00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.315 16:57:15 -- setup/common.sh@33 -- # echo 0 00:04:59.315 16:57:15 -- setup/common.sh@33 -- # return 0 00:04:59.315 16:57:15 -- setup/hugepages.sh@97 -- # anon=0 00:04:59.315 16:57:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.315 16:57:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.315 16:57:15 -- setup/common.sh@18 -- # local node= 00:04:59.315 16:57:15 -- setup/common.sh@19 -- # local var val 00:04:59.315 16:57:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:59.315 16:57:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.315 16:57:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.315 16:57:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.315 16:57:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.315 16:57:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43274140 kB' 'MemAvailable: 46788704 kB' 'Buffers: 2704 kB' 'Cached: 12774508 kB' 'SwapCached: 0 kB' 'Active: 9768576 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372996 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502728 kB' 'Mapped: 211244 kB' 'Shmem: 8873464 kB' 'KReclaimable: 
209308 kB' 'Slab: 599320 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390012 kB' 'KernelStack: 12864 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10532020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB' 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.315 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.315 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.315 16:57:15 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.316 16:57:15 -- setup/common.sh@32 -- # continue 00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.316 
16:57:15 -- setup/common.sh@31 -- # read -r var val _
00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... (identical compare/continue xtrace for HugePages_Free and HugePages_Rsvd elided)
00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.316 16:57:15 -- setup/common.sh@33 -- # echo 0
00:04:59.316 16:57:15 -- setup/common.sh@33 -- # return 0
00:04:59.316 16:57:15 -- setup/hugepages.sh@99 -- # surp=0
00:04:59.316 16:57:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:59.316 16:57:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:59.316 16:57:15 -- setup/common.sh@18 -- # local node=
00:04:59.316 16:57:15 -- setup/common.sh@19 -- # local var val
00:04:59.316 16:57:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.316 16:57:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.316 16:57:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.316 16:57:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.316 16:57:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.316 16:57:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.316 16:57:15 -- setup/common.sh@31 -- # IFS=': '
00:04:59.316 16:57:15 -- setup/common.sh@31 -- # read -r var val _
00:04:59.316 16:57:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43273636 kB' 'MemAvailable: 46788200 kB' 'Buffers: 2704 kB' 'Cached: 12774508 kB' 'SwapCached: 0 kB' 'Active: 9768364 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372784 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502508 kB' 'Mapped: 211240 kB' 'Shmem: 8873464 kB' 'KReclaimable: 209308 kB' 'Slab: 599320 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390012 kB' 'KernelStack: 12912 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10532036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:04:59.316 16:57:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... (per-field compare/continue xtrace over the snapshot above elided, MemTotal through HugePages_Free)
00:04:59.317 16:57:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:59.318 16:57:15 -- setup/common.sh@33 -- # echo 0
00:04:59.318 16:57:15 -- setup/common.sh@33 -- # return 0
00:04:59.318 16:57:15 -- setup/hugepages.sh@100 -- # resv=0
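[Note] get_meminfo in setup/common.sh slurps the chosen meminfo file into the mem array (mapfile -t mem), then walks it with IFS=': ' and read -r var val _, skipping every "key: value" pair via continue until the requested key matches, at which point it echoes the value and returns. A minimal sketch of the same lookup idiom, assuming plain /proc/meminfo input (the function body below is illustrative, not the verbatim SPDK helper):

  get_meminfo() {
      # Print the value of a single "Key: value" line, e.g. HugePages_Rsvd -> 0.
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1  # key not found
  }

Against the snapshot printed above, get_meminfo HugePages_Surp and get_meminfo HugePages_Rsvd both resolve to 0, which is what hugepages.sh stores in surp and resv.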
00:04:59.318 16:57:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:59.318 nr_hugepages=1024
00:04:59.318 16:57:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:59.318 resv_hugepages=0
00:04:59.318 16:57:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:59.318 surplus_hugepages=0
00:04:59.318 16:57:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:59.318 anon_hugepages=0
00:04:59.318 16:57:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:59.318 16:57:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:59.318 16:57:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:59.318 16:57:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:59.318 16:57:15 -- setup/common.sh@18 -- # local node=
00:04:59.318 16:57:15 -- setup/common.sh@19 -- # local var val
00:04:59.318 16:57:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.318 16:57:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.318 16:57:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.318 16:57:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.318 16:57:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.318 16:57:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.318 16:57:15 -- setup/common.sh@31 -- # IFS=': '
00:04:59.318 16:57:15 -- setup/common.sh@31 -- # read -r var val _
00:04:59.318 16:57:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43273924 kB' 'MemAvailable: 46788488 kB' 'Buffers: 2704 kB' 'Cached: 12774520 kB' 'SwapCached: 0 kB' 'Active: 9767828 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372248 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501948 kB' 'Mapped: 211240 kB' 'Shmem: 8873476 kB' 'KReclaimable: 209308 kB' 'Slab: 599344 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 390036 kB' 'KernelStack: 12896 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10532048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:04:59.318 16:57:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... (per-field compare/continue xtrace over the snapshot above elided, MemTotal through Unaccepted)
00:04:59.319 16:57:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:59.319 16:57:15 -- setup/common.sh@33 -- # echo 1024
00:04:59.319 16:57:15 -- setup/common.sh@33 -- # return 0
00:04:59.319 16:57:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:59.319 16:57:15 -- setup/hugepages.sh@112 -- # get_nodes
00:04:59.319 16:57:15 -- setup/hugepages.sh@27 -- # local node
00:04:59.319 16:57:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.319 16:57:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:59.319 16:57:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.319 16:57:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:59.319 16:57:15 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:59.319 16:57:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:59.319 16:57:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:59.319 16:57:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
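[Note] The arithmetic guards at hugepages.sh@107-110 are a consistency check on the hugepage pool: the pool size the kernel reports (HugePages_Total: 1024) must equal the configured page count plus the surplus and reserved counts just read back, i.e. 1024 == 1024 + 0 + 0 here. get_nodes then seeds a per-NUMA-node expectation array from /sys/devices/system/node/node*/ (two nodes on this box: 1024 pages expected on node0, 0 on node1). A standalone sketch of the same pool check (illustrative, not the SPDK script itself; with surp and resv both 0 it reduces to HugePages_Total == nr_hugepages):

  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
  resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
  nr=$(cat /proc/sys/vm/nr_hugepages)
  if (( total == nr + surp + resv )); then
      echo "hugepage pool consistent: $total pages"
  else
      echo "pool mismatch: total=$total nr=$nr surp=$surp resv=$resv" >&2
  fi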
00:04:59.319 16:57:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:59.319 16:57:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.319 16:57:15 -- setup/common.sh@18 -- # local node=0
00:04:59.319 16:57:15 -- setup/common.sh@19 -- # local var val
00:04:59.319 16:57:15 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.319 16:57:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.319 16:57:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:59.319 16:57:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:59.319 16:57:15 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.319 16:57:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.319 16:57:15 -- setup/common.sh@31 -- # IFS=': '
00:04:59.319 16:57:15 -- setup/common.sh@31 -- # read -r var val _
00:04:59.319 16:57:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20362020 kB' 'MemUsed: 12514920 kB' 'SwapCached: 0 kB' 'Active: 5913232 kB' 'Inactive: 3324284 kB' 'Active(anon): 5654220 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976452 kB' 'Mapped: 124944 kB' 'AnonPages: 264248 kB' 'Shmem: 5393156 kB' 'KernelStack: 6008 kB' 'PageTables: 3368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111792 kB' 'Slab: 327104 kB' 'SReclaimable: 111792 kB' 'SUnreclaim: 215312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:59.319 16:57:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... (per-field compare/continue xtrace over the node0 snapshot above elided, MemTotal through HugePages_Free)
00:04:59.320 16:57:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.320 16:57:15 -- setup/common.sh@33 -- # echo 0
00:04:59.320 16:57:15 -- setup/common.sh@33 -- # return 0
00:04:59.320 16:57:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:59.320 16:57:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.320 16:57:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.320 16:57:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.320 16:57:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:59.320 node0=1024 expecting 1024
00:04:59.320 16:57:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:59.320 16:57:15 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:59.320 16:57:15 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:59.320 16:57:15 -- setup/hugepages.sh@202 -- # setup output
00:04:59.320 16:57:15 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.320 16:57:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:00.693 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:00.693 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:00.693 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:00.693 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:00.693 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:00.693 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:00.693 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:00.693 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:00.693 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:00.693 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:00.693 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:00.693 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:00.693 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:00.693 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:00.693 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:00.693 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:00.693 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:00.693 INFO: Requested 512 hugepages but 1024 already allocated on node0
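[Note] When get_meminfo is given a node argument (get_meminfo HugePages_Surp 0 above), it retargets mem_f at /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from every line (the mem=("${mem[@]#Node +([0-9]) }") extglob expansion) so the same key/value scan works unchanged. A sketch of querying the per-node pool directly, assuming the standard sysfs layout (illustrative, not the SPDK code):

  for node_dir in /sys/devices/system/node/node[0-9]*; do
      # Per-node lines look like: "Node 0 HugePages_Total:  1024"
      while read -r _ _ key val; do
          [[ $key == "HugePages_Total:" ]] && echo "${node_dir##*/}: $val hugepages"
      done < "$node_dir/meminfo"
  done

The INFO line above is scripts/setup.sh declining to shrink the pool: NRHUGE=512 was requested, but 1024 pages were already allocated on node0 and CLEAR_HUGE=no leaves them in place, so verify_nr_hugepages below re-validates against the existing 1024.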
00:05:00.693 16:57:16 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:00.693 16:57:16 -- setup/hugepages.sh@89 -- # local node
00:05:00.693 16:57:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:00.693 16:57:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:00.693 16:57:16 -- setup/hugepages.sh@92 -- # local surp
00:05:00.693 16:57:16 -- setup/hugepages.sh@93 -- # local resv
00:05:00.693 16:57:16 -- setup/hugepages.sh@94 -- # local anon
00:05:00.693 16:57:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:00.693 16:57:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:00.693 16:57:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:00.693 16:57:16 -- setup/common.sh@18 -- # local node=
00:05:00.693 16:57:16 -- setup/common.sh@19 -- # local var val
00:05:00.693 16:57:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.693 16:57:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.693 16:57:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.693 16:57:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.693 16:57:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.693 16:57:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.693 16:57:16 -- setup/common.sh@31 -- # IFS=': '
00:05:00.693 16:57:16 -- setup/common.sh@31 -- # read -r var val _
00:05:00.693 16:57:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43258648 kB' 'MemAvailable: 46773212 kB' 'Buffers: 2704 kB' 'Cached: 12774588 kB' 'SwapCached: 0 kB' 'Active: 9768608 kB' 'Inactive: 3508168 kB' 'Active(anon): 9373028 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502760 kB' 'Mapped: 211252 kB' 'Shmem: 8873544 kB' 'KReclaimable: 209308 kB' 'Slab: 599136 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 389828 kB' 'KernelStack: 12880 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10532352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197256 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:05:00.693 16:57:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... (per-field compare/continue xtrace over the snapshot above elided, MemTotal through HardwareCorrupted)
00:05:00.694 16:57:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:00.694 16:57:16 -- setup/common.sh@33 -- # echo 0
00:05:00.694 16:57:16 -- setup/common.sh@33 -- # return 0
00:05:00.694 16:57:16 -- setup/hugepages.sh@97 -- # anon=0
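[Note] verify_nr_hugepages also accounts for transparent huge pages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above compares the contents of /sys/kernel/mm/transparent_hugepage/enabled (the bracketed word is the active policy) against the [never] pattern, and only when THP is not disabled does it read AnonHugePages from /proc/meminfo — 0 kB here, hence anon=0. A minimal sketch of that probe (illustrative, assuming the standard sysfs file):

  thp_policy=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  if [[ $thp_policy != *"[never]"* ]]; then
      # THP can be handed out; report anonymous huge page usage.
      awk '$1 == "AnonHugePages:" {print $2, $3}' /proc/meminfo
  fi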
00:05:00.694 16:57:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:00.694 16:57:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.694 16:57:16 -- setup/common.sh@18 -- # local node=
00:05:00.694 16:57:16 -- setup/common.sh@19 -- # local var val
00:05:00.694 16:57:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.694 16:57:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.694 16:57:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.694 16:57:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.694 16:57:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.694 16:57:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.694 16:57:16 -- setup/common.sh@31 -- # IFS=': '
00:05:00.694 16:57:16 -- setup/common.sh@31 -- # read -r var val _
00:05:00.694 16:57:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43262928 kB' 'MemAvailable: 46777492 kB' 'Buffers: 2704 kB' 'Cached: 12774588 kB' 'SwapCached: 0 kB' 'Active: 9768844 kB' 'Inactive: 3508168 kB' 'Active(anon): 9373264 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502960 kB' 'Mapped: 211256 kB' 'Shmem: 8873544 kB' 'KReclaimable: 209308 kB' 'Slab: 599104 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 389796 kB' 'KernelStack: 12832 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10532364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197224 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
00:05:00.694 16:57:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... (compare/continue xtrace elided through NFS_Unstable; the remaining comparisons continue in the trace below)
00:05:00.695 16:57:16 --
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.695 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.695 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.695 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.695 16:57:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.695 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.695 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.695 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.695 16:57:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.695 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.695 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.695 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.695 16:57:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.695 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.695 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.695 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.695 16:57:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.695 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # continue 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.696 16:57:16 -- setup/common.sh@33 -- # echo 0 00:05:00.696 16:57:16 -- setup/common.sh@33 -- # return 0 00:05:00.696 16:57:16 -- setup/hugepages.sh@99 -- # surp=0 00:05:00.696 16:57:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.696 16:57:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.696 16:57:16 -- setup/common.sh@18 -- # local node= 00:05:00.696 16:57:16 -- setup/common.sh@19 -- # local var val 00:05:00.696 16:57:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.696 16:57:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.696 16:57:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.696 16:57:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.696 16:57:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.696 16:57:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.696 16:57:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43263332 kB' 'MemAvailable: 46777896 kB' 'Buffers: 2704 kB' 'Cached: 12774592 kB' 'SwapCached: 0 kB' 'Active: 9768388 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372808 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 
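The helper driving this whole section is get_meminfo from test/setup/common.sh. Reconstructed from the xtrace records above (a sketch inferred from the trace, not the verbatim source, so details may differ):

    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val line
        local mem_f=/proc/meminfo mem
        # With a node argument, read that node's counters from sysfs instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With node unset, the [[ -e ]] test probes the nonexistent path /sys/devices/system/node/node/meminfo and falls through to /proc/meminfo, which is exactly what the trace shows.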
00:05:00.696 16:57:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:00.696 16:57:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:00.696 16:57:16 -- setup/common.sh@18 -- # local node=
00:05:00.696 16:57:16 -- setup/common.sh@19 -- # local var val
00:05:00.696 16:57:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.696 16:57:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.696 16:57:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.696 16:57:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.696 16:57:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.696 16:57:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.696 16:57:16 -- setup/common.sh@31 -- # IFS=': '
00:05:00.696 16:57:16 -- setup/common.sh@31 -- # read -r var val _
00:05:00.696 16:57:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43263332 kB' 'MemAvailable: 46777896 kB' 'Buffers: 2704 kB' 'Cached: 12774592 kB' 'SwapCached: 0 kB' 'Active: 9768388 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372808 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502536 kB' 'Mapped: 211248 kB' 'Shmem: 8873548 kB' 'KReclaimable: 209308 kB' 'Slab: 599216 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 389908 kB' 'KernelStack: 12896 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10532380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[repetitive xtrace condensed: setup/common.sh@31-32 re-ran IFS=': ' / read -r var val _ over every line of the snapshot above, comparing each key against HugePages_Rsvd and continuing until the match below]
00:05:00.697 16:57:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:00.697 16:57:16 -- setup/common.sh@33 -- # echo 0
00:05:00.697 16:57:16 -- setup/common.sh@33 -- # return 0
00:05:00.697 16:57:16 -- setup/hugepages.sh@100 -- # resv=0
00:05:00.697 16:57:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:00.697 nr_hugepages=1024
00:05:00.697 16:57:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:00.697 resv_hugepages=0
00:05:00.697 16:57:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:00.697 surplus_hugepages=0
00:05:00.697 16:57:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:00.697 anon_hugepages=0
00:05:00.697 16:57:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:00.697 16:57:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
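The echoes and arithmetic checks just above are the suite's hugepage accounting: HugePages_Total is the persistent pool, HugePages_Rsvd counts pages promised to mappings but not yet faulted in, and HugePages_Surp counts overcommitted pages beyond the persistent pool. A sketch of the reconciliation as traced (get_meminfo as reconstructed earlier; 1024 is the pool size this run configured):

    nr_hugepages=1024                         # requested pool size
    surp=$(get_meminfo HugePages_Surp)        # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)        # 0 in this run
    (( 1024 == nr_hugepages + surp + resv ))  # hugepages.sh@107: totals reconcile
    (( 1024 == nr_hugepages ))                # hugepages.sh@109: no surplus/reserved drift

With both counters at zero, the test can assert the pool is exactly the 1024 pages it asked for.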
00:05:00.697 16:57:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:00.697 16:57:16 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:00.697 16:57:16 -- setup/common.sh@18 -- # local node=
00:05:00.697 16:57:16 -- setup/common.sh@19 -- # local var val
00:05:00.697 16:57:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.697 16:57:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.697 16:57:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.697 16:57:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.697 16:57:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.697 16:57:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.697 16:57:16 -- setup/common.sh@31 -- # IFS=': '
00:05:00.697 16:57:16 -- setup/common.sh@31 -- # read -r var val _
00:05:00.697 16:57:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43263632 kB' 'MemAvailable: 46778196 kB' 'Buffers: 2704 kB' 'Cached: 12774616 kB' 'SwapCached: 0 kB' 'Active: 9768140 kB' 'Inactive: 3508168 kB' 'Active(anon): 9372560 kB' 'Inactive(anon): 0 kB' 'Active(file): 395580 kB' 'Inactive(file): 3508168 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502260 kB' 'Mapped: 211248 kB' 'Shmem: 8873572 kB' 'KReclaimable: 209308 kB' 'Slab: 599216 kB' 'SReclaimable: 209308 kB' 'SUnreclaim: 389908 kB' 'KernelStack: 12880 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10532392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 42048 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2287196 kB' 'DirectMap2M: 16506880 kB' 'DirectMap1G: 50331648 kB'
[repetitive xtrace condensed: setup/common.sh@31-32 re-ran IFS=': ' / read -r var val _ over every line of the snapshot above, comparing each key against HugePages_Total and continuing until the match below]
00:05:00.699 16:57:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:00.699 16:57:16 -- setup/common.sh@33 -- # echo 1024
00:05:00.699 16:57:16 -- setup/common.sh@33 -- # return 0
00:05:00.699 16:57:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
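HugePages_Total reading back as exactly 1024 is the point of no_shrink_alloc: the persistent pool survived the allocation pressure unshrunk. For reference, the kernel exports the same counters under sysfs, so an equivalent spot check outside the test harness would be:

    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages       # expect 1024
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages  # expect 0
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages     # expect 0
    grep '^HugePages_' /proc/meminfo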
00:05:00.699 16:57:16 -- setup/hugepages.sh@112 -- # get_nodes
00:05:00.699 16:57:16 -- setup/hugepages.sh@27 -- # local node
00:05:00.699 16:57:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:00.699 16:57:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:00.699 16:57:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:00.699 16:57:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:00.699 16:57:16 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:00.699 16:57:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:00.699 16:57:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:00.699 16:57:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:00.699 16:57:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:00.699 16:57:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.699 16:57:16 -- setup/common.sh@18 -- # local node=0
00:05:00.699 16:57:16 -- setup/common.sh@19 -- # local var val
00:05:00.699 16:57:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.699 16:57:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.699 16:57:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:00.699 16:57:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:00.699 16:57:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.699 16:57:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.699 16:57:16 -- setup/common.sh@31 -- # IFS=': '
00:05:00.699 16:57:16 -- setup/common.sh@31 -- # read -r var val _
00:05:00.699 16:57:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20351752 kB' 'MemUsed: 12525188 kB' 'SwapCached: 0 kB' 'Active: 5912836 kB' 'Inactive: 3324284 kB' 'Active(anon): 5653824 kB' 'Inactive(anon): 0 kB' 'Active(file): 259012 kB' 'Inactive(file): 3324284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8976524 kB' 'Mapped: 124944 kB' 'AnonPages: 263772 kB' 'Shmem: 5393228 kB' 'KernelStack: 5960 kB' 'PageTables: 3088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111792 kB' 'Slab: 327040 kB' 'SReclaimable: 111792 kB' 'SUnreclaim: 215248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[repetitive xtrace condensed: setup/common.sh@31-32 re-ran IFS=': ' / read -r var val _ over every line of the node0 snapshot above, comparing each key against HugePages_Surp and continuing until the match below]
00:05:00.700 16:57:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.700 16:57:16 -- setup/common.sh@33 -- # echo 0
00:05:00.700 16:57:16 -- setup/common.sh@33 -- # return 0
00:05:00.700 16:57:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:00.700 16:57:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:00.700 16:57:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:00.700 16:57:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:00.700 16:57:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:00.700 node0=1024 expecting 1024
00:05:00.700 16:57:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:00.700 
00:05:00.700 real 0m2.761s
00:05:00.700 user 0m1.131s
00:05:00.700 sys 0m1.559s
00:05:00.700 16:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:00.700 16:57:16 -- common/autotest_common.sh@10 -- # set +x
00:05:00.700 ************************************
00:05:00.700 END TEST no_shrink_alloc
00:05:00.700 ************************************
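The get_nodes walk above is the NUMA half of the check: the node index is peeled off the sysfs directory name with ${node##*node}, and the trace shows node0 holding the whole 1024-page pool while node1 holds none. A sketch of that walk (hedged; the nodes_sys name follows the trace, and get_meminfo is the reconstruction given earlier):

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}   # ".../node0" -> "0"
        nodes_sys[n]=$(get_meminfo HugePages_Total "$n")
    done
    echo "node0=${nodes_sys[0]} expecting 1024"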
00:05:00.700 16:57:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.700 16:57:16 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.700 16:57:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.700 16:57:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.700 16:57:16 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.700 16:57:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.700 16:57:16 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.700 16:57:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:00.700 16:57:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:00.700 00:05:00.700 real 0m11.145s 00:05:00.700 user 0m4.175s 00:05:00.700 sys 0m5.798s 00:05:00.700 16:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.700 16:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.700 ************************************ 00:05:00.700 END TEST hugepages 00:05:00.700 ************************************ 00:05:00.700 16:57:16 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:00.700 16:57:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.700 16:57:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.700 16:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:00.700 ************************************ 00:05:00.700 START TEST driver 00:05:00.700 ************************************ 00:05:00.700 16:57:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:00.957 * Looking for test storage... 
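Before this driver suite began, the clear_hp traces above walked every NUMA node and released its hugepage pools. A rough sketch of that loop, with the caveat that the exact sysfs filename written to is an assumption (nr_hugepages is the standard writable knob) and the CLEAR_HUGE semantics are inferred from the trace:

  # Zero each per-node hugepage pool so the next suite starts clean (needs root).
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"    # assumed target file; the trace only shows `echo 0`
      done
  done
  export CLEAR_HUGE=yes                  # exported by the traced script; consumed by setup.sh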
00:05:00.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:00.957 16:57:16 -- setup/driver.sh@68 -- # setup reset 00:05:00.957 16:57:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.957 16:57:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.480 16:57:19 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:03.480 16:57:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:03.480 16:57:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.480 16:57:19 -- common/autotest_common.sh@10 -- # set +x 00:05:03.480 ************************************ 00:05:03.480 START TEST guess_driver 00:05:03.480 ************************************ 00:05:03.480 16:57:19 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:03.480 16:57:19 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:03.480 16:57:19 -- setup/driver.sh@47 -- # local fail=0 00:05:03.480 16:57:19 -- setup/driver.sh@49 -- # pick_driver 00:05:03.480 16:57:19 -- setup/driver.sh@36 -- # vfio 00:05:03.480 16:57:19 -- setup/driver.sh@21 -- # local iommu_grups 00:05:03.480 16:57:19 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:03.480 16:57:19 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:03.480 16:57:19 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:03.480 16:57:19 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:03.480 16:57:19 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:03.480 16:57:19 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:03.480 16:57:19 -- setup/driver.sh@14 -- # mod vfio_pci 00:05:03.480 16:57:19 -- setup/driver.sh@12 -- # dep vfio_pci 00:05:03.480 16:57:19 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:03.480 16:57:19 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:03.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:03.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:03.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:03.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:03.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:03.481 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:03.481 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:03.481 16:57:19 -- setup/driver.sh@30 -- # return 0 00:05:03.481 16:57:19 -- setup/driver.sh@37 -- # echo vfio-pci 00:05:03.481 16:57:19 -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:03.481 16:57:19 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:03.481 16:57:19 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:03.481 Looking for driver=vfio-pci 00:05:03.481 16:57:19 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:03.481 16:57:19 -- setup/driver.sh@45 -- # setup output config 00:05:03.481 16:57:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.481 16:57:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.415 16:57:20 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.415 16:57:20 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.415 16:57:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.351 16:57:21 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:05:05.351 16:57:21 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.351 16:57:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.351 16:57:21 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:05.351 16:57:21 -- setup/driver.sh@65 -- # setup reset 00:05:05.351 16:57:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.351 16:57:21 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.881 00:05:07.881 real 0m4.612s 00:05:07.881 user 0m1.025s 00:05:07.881 sys 0m1.739s 00:05:07.881 16:57:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.881 16:57:23 -- common/autotest_common.sh@10 -- # set +x 00:05:07.881 ************************************ 00:05:07.881 END TEST guess_driver 00:05:07.881 ************************************ 00:05:07.881 00:05:07.881 real 0m7.041s 00:05:07.881 user 0m1.597s 00:05:07.881 sys 0m2.745s 00:05:07.881 16:57:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.881 16:57:23 -- common/autotest_common.sh@10 -- # set +x 00:05:07.881 ************************************ 00:05:07.881 END TEST driver 00:05:07.881 ************************************ 00:05:07.881 16:57:23 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:07.881 16:57:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.881 16:57:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.881 16:57:23 -- common/autotest_common.sh@10 -- # set +x 00:05:07.881 ************************************ 00:05:07.881 START TEST devices 00:05:07.881 ************************************ 00:05:07.881 16:57:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:07.881 * Looking for test storage... 
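Two things are worth noting from the guess_driver trace above. First, driver.sh@21 declares `local iommu_grups` while driver.sh@27 assigns `iommu_groups=(...)`: the misspelled local means the array is actually created at global scope rather than function scope, a latent (if harmless here) bug in the traced script. Second, the decision itself is simple: vfio-pci wins because the host exposes 141 IOMMU groups and `modprobe --show-depends vfio_pci` resolves to real .ko modules. A condensed reconstruction, not the verbatim script ('No valid driver found' is the failure marker the trace compares against at driver.sh@51):

  # Prefer vfio-pci when IOMMU groups exist and the module dependency
  # chain resolves; otherwise report the script's failure marker.
  pick_driver() {
      shopt -s nullglob
      local groups=(/sys/kernel/iommu_groups/*)
      shopt -u nullglob
      if ((${#groups[@]} > 0)) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo 'No valid driver found'
      fi
  }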
00:05:07.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:07.881 16:57:23 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:07.881 16:57:23 -- setup/devices.sh@192 -- # setup reset 00:05:07.881 16:57:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.881 16:57:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.252 16:57:25 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:09.252 16:57:25 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:09.252 16:57:25 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:09.252 16:57:25 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:09.252 16:57:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:09.252 16:57:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:09.252 16:57:25 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:09.252 16:57:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.252 16:57:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:09.252 16:57:25 -- setup/devices.sh@196 -- # blocks=() 00:05:09.252 16:57:25 -- setup/devices.sh@196 -- # declare -a blocks 00:05:09.252 16:57:25 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:09.252 16:57:25 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:09.252 16:57:25 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:09.252 16:57:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:09.252 16:57:25 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:09.252 16:57:25 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:09.252 16:57:25 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:09.252 16:57:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:09.252 16:57:25 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:09.252 16:57:25 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:09.252 16:57:25 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:09.252 No valid GPT data, bailing 00:05:09.252 16:57:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.509 16:57:25 -- scripts/common.sh@393 -- # pt= 00:05:09.509 16:57:25 -- scripts/common.sh@394 -- # return 1 00:05:09.509 16:57:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:09.509 16:57:25 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:09.509 16:57:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:09.509 16:57:25 -- setup/common.sh@80 -- # echo 1000204886016 00:05:09.509 16:57:25 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:09.509 16:57:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:09.509 16:57:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:09.509 16:57:25 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:09.509 16:57:25 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:09.509 16:57:25 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:09.509 16:57:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.509 16:57:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.509 16:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:09.509 ************************************ 00:05:09.509 START TEST nvme_mount 00:05:09.509 ************************************ 00:05:09.509 16:57:25 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:05:09.509 16:57:25 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:09.509 16:57:25 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:09.509 16:57:25 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.509 16:57:25 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:09.509 16:57:25 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:09.509 16:57:25 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:09.509 16:57:25 -- setup/common.sh@40 -- # local part_no=1 00:05:09.509 16:57:25 -- setup/common.sh@41 -- # local size=1073741824 00:05:09.509 16:57:25 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:09.509 16:57:25 -- setup/common.sh@44 -- # parts=() 00:05:09.509 16:57:25 -- setup/common.sh@44 -- # local parts 00:05:09.509 16:57:25 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:09.509 16:57:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.509 16:57:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:09.509 16:57:25 -- setup/common.sh@46 -- # (( part++ )) 00:05:09.509 16:57:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.509 16:57:25 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:09.509 16:57:25 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:09.509 16:57:25 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:10.442 Creating new GPT entries in memory. 00:05:10.442 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:10.442 other utilities. 00:05:10.442 16:57:26 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:10.442 16:57:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.442 16:57:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.442 16:57:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.442 16:57:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:11.374 Creating new GPT entries in memory. 00:05:11.374 The operation has completed successfully. 
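The bounds in the flock/sgdisk call above follow directly from the traced size math: `(( size /= 512 ))` turns 1073741824 bytes into 2097152 512-byte sectors, so a partition that starts at sector 2048 ends at 2048 + 2097152 - 1 = 2099199. The same two commands, standalone (flock on the device file serializes concurrent sgdisk callers):

  sgdisk /dev/nvme0n1 --zap-all                                  # drop any old GPT/MBR state
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199    # 1 GiB partition at sector 2048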
00:05:11.374 16:57:27 -- setup/common.sh@57 -- # (( part++ )) 00:05:11.374 16:57:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.374 16:57:27 -- setup/common.sh@62 -- # wait 406577 00:05:11.375 16:57:27 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.375 16:57:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:11.375 16:57:27 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.375 16:57:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:11.375 16:57:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:11.375 16:57:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.375 16:57:27 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.375 16:57:27 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:11.375 16:57:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:11.375 16:57:27 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.375 16:57:27 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.375 16:57:27 -- setup/devices.sh@53 -- # local found=0 00:05:11.375 16:57:27 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.375 16:57:27 -- setup/devices.sh@56 -- # : 00:05:11.375 16:57:27 -- setup/devices.sh@59 -- # local pci status 00:05:11.375 16:57:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.375 16:57:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:11.375 16:57:27 -- setup/devices.sh@47 -- # setup output config 00:05:11.375 16:57:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.375 16:57:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:12.750 16:57:28 -- setup/devices.sh@63 -- # found=1 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 
16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.750 16:57:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.750 16:57:28 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:12.750 16:57:28 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.750 16:57:28 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.750 16:57:28 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.750 16:57:28 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:12.750 16:57:28 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.750 16:57:28 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.750 16:57:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:12.750 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.750 16:57:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.750 16:57:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:13.008 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:13.008 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:13.008 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:13.008 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:13.008 16:57:29 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:13.008 16:57:29 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:13.008 16:57:29 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.008 16:57:29 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:13.008 16:57:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:13.267 16:57:29 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.267 16:57:29 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.267 16:57:29 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:13.267 16:57:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:13.267 16:57:29 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.267 16:57:29 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.267 16:57:29 -- setup/devices.sh@53 -- # local found=0 00:05:13.267 16:57:29 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.267 16:57:29 -- setup/devices.sh@56 -- # : 00:05:13.267 16:57:29 -- setup/devices.sh@59 -- # local pci status 00:05:13.267 16:57:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:13.267 16:57:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.267 16:57:29 -- setup/devices.sh@47 -- # setup output config 00:05:13.267 16:57:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.267 16:57:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:14.203 16:57:30 -- setup/devices.sh@63 -- # found=1 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.203 16:57:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.203 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.462 16:57:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.462 16:57:30 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:14.462 16:57:30 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.462 16:57:30 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.462 16:57:30 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.462 16:57:30 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.462 16:57:30 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:14.462 16:57:30 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:14.462 16:57:30 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:14.462 16:57:30 -- setup/devices.sh@50 -- # local mount_point= 00:05:14.462 16:57:30 -- setup/devices.sh@51 -- # local test_file= 00:05:14.462 16:57:30 -- setup/devices.sh@53 -- # local found=0 00:05:14.462 16:57:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.462 16:57:30 -- setup/devices.sh@59 -- # local pci status 00:05:14.462 16:57:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.462 16:57:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:14.462 16:57:30 -- setup/devices.sh@47 -- # setup output config 00:05:14.462 16:57:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.462 16:57:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:15.394 16:57:31 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:15.394 16:57:31 -- setup/devices.sh@63 -- # found=1 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.394 16:57:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.394 16:57:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.651 16:57:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.651 16:57:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.651 16:57:31 -- setup/devices.sh@68 -- # return 0 00:05:15.651 16:57:31 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:15.651 16:57:31 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.651 16:57:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:05:15.651 16:57:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.651 16:57:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.651 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.651 00:05:15.651 real 0m6.230s 00:05:15.651 user 0m1.483s 00:05:15.651 sys 0m2.361s 00:05:15.652 16:57:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.652 16:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:15.652 ************************************ 00:05:15.652 END TEST nvme_mount 00:05:15.652 ************************************ 00:05:15.652 16:57:31 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:15.652 16:57:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.652 16:57:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.652 16:57:31 -- common/autotest_common.sh@10 -- # set +x 00:05:15.652 ************************************ 00:05:15.652 START TEST dm_mount 00:05:15.652 ************************************ 00:05:15.652 16:57:31 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:15.652 16:57:31 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:15.652 16:57:31 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:15.652 16:57:31 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:15.652 16:57:31 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:15.652 16:57:31 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:15.652 16:57:31 -- setup/common.sh@40 -- # local part_no=2 00:05:15.652 16:57:31 -- setup/common.sh@41 -- # local size=1073741824 00:05:15.652 16:57:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:15.652 16:57:31 -- setup/common.sh@44 -- # parts=() 00:05:15.652 16:57:31 -- setup/common.sh@44 -- # local parts 00:05:15.652 16:57:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:15.652 16:57:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.652 16:57:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.652 16:57:31 -- setup/common.sh@46 -- # (( part++ )) 00:05:15.652 16:57:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.652 16:57:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.652 16:57:31 -- setup/common.sh@46 -- # (( part++ )) 00:05:15.652 16:57:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.652 16:57:31 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:15.652 16:57:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:15.652 16:57:31 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:16.584 Creating new GPT entries in memory. 00:05:16.584 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:16.584 other utilities. 00:05:16.584 16:57:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:16.584 16:57:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.584 16:57:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.584 16:57:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.584 16:57:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:17.954 Creating new GPT entries in memory. 00:05:17.954 The operation has completed successfully. 
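The byte patterns wipefs reports during these cleanups decode to standard on-disk signatures: `53 ef` at offset 0x438 is the little-endian ext4 superblock magic 0xEF53 (superblock at byte 1024 plus s_magic at 0x38); `45 46 49 20 50 41 52 54` is ASCII "EFI PART", the GPT header at LBA 1, and its twin at 0xe8e0db5e00 is the backup header in the disk's final 512-byte sector (1000204886016 - 512 bytes); `55 aa` at 0x1fe is the protective-MBR boot signature. A spot check of the ext4 magic before such a wipe (illustrative only, not part of the test):

  dd if=/dev/nvme0n1p1 bs=1 skip=$((0x438)) count=2 2>/dev/null | xxd   # expect: 53ef
  wipefs --all /dev/nvme0n1p1                                           # erases every detected signature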
00:05:17.954 16:57:33 -- setup/common.sh@57 -- # (( part++ )) 00:05:17.954 16:57:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.954 16:57:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:17.954 16:57:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.954 16:57:33 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:18.887 The operation has completed successfully. 00:05:18.887 16:57:34 -- setup/common.sh@57 -- # (( part++ )) 00:05:18.887 16:57:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.887 16:57:34 -- setup/common.sh@62 -- # wait 408985 00:05:18.887 16:57:34 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:18.887 16:57:34 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.887 16:57:34 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.887 16:57:34 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:18.887 16:57:34 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:18.887 16:57:34 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.887 16:57:34 -- setup/devices.sh@161 -- # break 00:05:18.887 16:57:34 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.887 16:57:34 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:18.887 16:57:34 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:18.887 16:57:34 -- setup/devices.sh@166 -- # dm=dm-0 00:05:18.887 16:57:34 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:18.887 16:57:34 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:18.887 16:57:34 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.887 16:57:34 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:18.887 16:57:34 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.887 16:57:34 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.887 16:57:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:18.887 16:57:34 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.887 16:57:34 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.887 16:57:34 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:18.887 16:57:34 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:18.887 16:57:34 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.887 16:57:34 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.887 16:57:34 -- setup/devices.sh@53 -- # local found=0 00:05:18.887 16:57:34 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:18.887 16:57:34 -- setup/devices.sh@56 -- # : 00:05:18.887 16:57:34 -- 
setup/devices.sh@59 -- # local pci status 00:05:18.887 16:57:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.887 16:57:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:18.887 16:57:34 -- setup/devices.sh@47 -- # setup output config 00:05:18.887 16:57:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.887 16:57:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:19.819 16:57:35 -- setup/devices.sh@63 -- # found=1 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.819 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.819 16:57:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.819 16:57:35 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:19.819 16:57:35 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:19.819 16:57:35 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:19.819 16:57:35 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.077 16:57:35 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.077 16:57:35 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:20.077 16:57:35 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:20.077 16:57:35 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:20.077 16:57:35 -- setup/devices.sh@50 -- # local mount_point= 00:05:20.077 16:57:35 -- setup/devices.sh@51 -- # local test_file= 00:05:20.077 16:57:35 -- setup/devices.sh@53 -- # local found=0 00:05:20.077 16:57:35 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:20.077 16:57:35 -- setup/devices.sh@59 -- # local pci status 00:05:20.077 16:57:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.077 16:57:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:20.077 16:57:35 -- setup/devices.sh@47 -- # setup output config 00:05:20.077 16:57:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.077 16:57:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:21.011 16:57:37 -- setup/devices.sh@63 -- # found=1 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 
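The holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 string being verified here is read straight from sysfs: while nvme_dm_test exists, each backing partition names the dm node in its holders/ directory (the trace tested exactly those paths at devices.sh@168-169 earlier). The same relationship can be checked by hand (illustrative commands, not part of the test):

  for part in nvme0n1p1 nvme0n1p2; do
      ls "/sys/class/block/$part/holders"   # expect: dm-0
  done
  dmsetup table nvme_dm_test                # shows the segments mapped onto the two partitions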
00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.011 16:57:37 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.011 16:57:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.269 16:57:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.269 16:57:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:21.269 16:57:37 -- setup/devices.sh@68 -- # return 0 00:05:21.269 16:57:37 -- setup/devices.sh@187 -- # cleanup_dm 00:05:21.269 16:57:37 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.269 16:57:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.269 16:57:37 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:21.269 16:57:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.269 16:57:37 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:21.269 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:21.269 16:57:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:21.269 16:57:37 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:21.269 00:05:21.269 real 0m5.657s 00:05:21.269 user 0m0.991s 00:05:21.269 sys 0m1.571s 00:05:21.269 16:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.269 16:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:21.269 ************************************ 00:05:21.269 END TEST dm_mount 00:05:21.269 ************************************ 00:05:21.269 16:57:37 -- setup/devices.sh@1 -- # cleanup 00:05:21.269 16:57:37 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:21.269 16:57:37 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.269 16:57:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.269 16:57:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:21.269 16:57:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.269 16:57:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:21.528 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:21.528 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:21.528 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:21.528 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:21.528 16:57:37 -- setup/devices.sh@12 -- # cleanup_dm 00:05:21.528 16:57:37 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.528 16:57:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.528 16:57:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.528 16:57:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:21.528 16:57:37 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.528 16:57:37 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:21.528 00:05:21.528 real 0m13.738s 00:05:21.528 user 0m3.084s 00:05:21.528 sys 0m4.931s 00:05:21.528 16:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.528 16:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:21.528 ************************************ 00:05:21.528 END TEST devices 00:05:21.528 ************************************ 00:05:21.528 00:05:21.528 real 0m42.089s 00:05:21.528 user 0m12.049s 00:05:21.528 sys 0m18.701s 00:05:21.528 16:57:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.528 16:57:37 -- common/autotest_common.sh@10 -- # set +x 00:05:21.528 ************************************ 00:05:21.528 END TEST setup.sh 00:05:21.528 ************************************ 00:05:21.528 16:57:37 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:22.903 Hugepages 00:05:22.903 node hugesize free / total 00:05:22.903 node0 1048576kB 0 / 0 00:05:22.903 node0 2048kB 2048 / 2048 00:05:22.903 node1 1048576kB 0 / 0 00:05:22.903 node1 2048kB 0 / 0 00:05:22.903 00:05:22.903 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.903 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:22.903 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:22.903 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:22.903 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:22.903 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:22.903 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:22.903 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:22.903 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:22.903 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:22.903 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:22.903 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:22.903 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:22.903 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:22.903 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:22.903 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:22.903 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:22.903 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:22.903 16:57:38 -- spdk/autotest.sh@141 -- # uname -s 00:05:22.903 16:57:38 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:22.903 16:57:38 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:22.903 16:57:38 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:23.837 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:23.837 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:23.837 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:23.837 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:23.837 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:23.837 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:05:23.837 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:23.837 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:23.837 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:23.837 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:23.837 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:23.837 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:23.837 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:23.837 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:23.837 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:23.837 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:25.211 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:25.211 16:57:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:26.145 16:57:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:26.145 16:57:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:26.145 16:57:42 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.145 16:57:42 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:26.145 16:57:42 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:26.145 16:57:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:26.145 16:57:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.145 16:57:42 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:26.145 16:57:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:26.145 16:57:42 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:26.145 16:57:42 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:26.145 16:57:42 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:27.516 Waiting for block devices as requested 00:05:27.516 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:27.516 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:27.516 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:27.516 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:27.516 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:27.516 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:27.773 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:27.773 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:27.773 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:27.773 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:28.075 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:28.075 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:28.075 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:28.075 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:28.332 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:28.332 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:28.332 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:28.332 16:57:44 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:28.332 16:57:44 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:28.332 16:57:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:28.589 16:57:44 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:28.589 16:57:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:28.589 16:57:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:28.589 16:57:44 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:28.589 16:57:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:28.589 16:57:44 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:28.589 16:57:44 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:28.589 16:57:44 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:28.589 16:57:44 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:28.589 16:57:44 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:28.589 16:57:44 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:05:28.589 16:57:44 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:28.589 16:57:44 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:28.589 16:57:44 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:28.589 16:57:44 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:28.589 16:57:44 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:28.589 16:57:44 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:28.589 16:57:44 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:28.589 16:57:44 -- common/autotest_common.sh@1542 -- # continue 00:05:28.589 16:57:44 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:28.589 16:57:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:28.589 16:57:44 -- common/autotest_common.sh@10 -- # set +x 00:05:28.589 16:57:44 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:28.589 16:57:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:28.589 16:57:44 -- common/autotest_common.sh@10 -- # set +x 00:05:28.589 16:57:44 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:29.964 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.964 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:29.964 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.964 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.964 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.964 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.964 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.964 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:29.964 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:29.964 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:29.964 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:29.964 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:29.964 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:29.964 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:29.964 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:29.964 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:30.897 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:30.897 16:57:46 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:30.897 16:57:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:30.897 16:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.897 16:57:46 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:30.897 16:57:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:30.897 16:57:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:30.897 16:57:46 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:30.897 16:57:46 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:30.897 16:57:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:30.897 16:57:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:30.897 
16:57:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:30.897 16:57:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.897 16:57:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:30.897 16:57:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:30.897 16:57:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:30.897 16:57:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:30.897 16:57:46 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:30.897 16:57:46 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:30.897 16:57:46 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:30.897 16:57:46 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:30.897 16:57:46 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:30.897 16:57:46 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:30.897 16:57:46 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:30.897 16:57:46 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=414334 00:05:30.897 16:57:46 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.897 16:57:46 -- common/autotest_common.sh@1583 -- # waitforlisten 414334 00:05:30.897 16:57:46 -- common/autotest_common.sh@819 -- # '[' -z 414334 ']' 00:05:30.897 16:57:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.897 16:57:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.897 16:57:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.897 16:57:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.897 16:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.897 [2024-07-20 16:57:47.039281] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
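The BDF discovery traced just above reduces to a short shell idiom. A minimal standalone sketch (assuming only an SPDK checkout at $rootdir and jq, both already used by the trace):

    # Enumerate NVMe controller BDFs from gen_nvme.sh's JSON config
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Keep only controllers whose PCI device ID matches (here 0x0a54, as in the log)
    matched=()
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "0x0a54" ]] && matched+=("$bdf")
    done
    printf '%s\n' "${matched[@]}"   # on this host: 0000:88:00.0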
00:05:30.897 [2024-07-20 16:57:47.039366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid414334 ] 00:05:31.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.156 [2024-07-20 16:57:47.101934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.156 [2024-07-20 16:57:47.195699] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.156 [2024-07-20 16:57:47.195899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.091 16:57:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.091 16:57:48 -- common/autotest_common.sh@852 -- # return 0 00:05:32.091 16:57:48 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:32.091 16:57:48 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:32.091 16:57:48 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:35.372 nvme0n1 00:05:35.372 16:57:51 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:35.372 [2024-07-20 16:57:51.286953] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:35.372 [2024-07-20 16:57:51.286998] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:35.372 request: 00:05:35.372 { 00:05:35.372 "nvme_ctrlr_name": "nvme0", 00:05:35.372 "password": "test", 00:05:35.372 "method": "bdev_nvme_opal_revert", 00:05:35.372 "req_id": 1 00:05:35.372 } 00:05:35.372 Got JSON-RPC error response 00:05:35.372 response: 00:05:35.372 { 00:05:35.372 "code": -32603, 00:05:35.372 "message": "Internal error" 00:05:35.372 } 00:05:35.372 16:57:51 -- common/autotest_common.sh@1589 -- # true 00:05:35.372 16:57:51 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:35.372 16:57:51 -- common/autotest_common.sh@1593 -- # killprocess 414334 00:05:35.372 16:57:51 -- common/autotest_common.sh@926 -- # '[' -z 414334 ']' 00:05:35.372 16:57:51 -- common/autotest_common.sh@930 -- # kill -0 414334 00:05:35.372 16:57:51 -- common/autotest_common.sh@931 -- # uname 00:05:35.372 16:57:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.372 16:57:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 414334 00:05:35.372 16:57:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:35.372 16:57:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:35.372 16:57:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 414334' 00:05:35.372 killing process with pid 414334 00:05:35.372 16:57:51 -- common/autotest_common.sh@945 -- # kill 414334 00:05:35.372 16:57:51 -- common/autotest_common.sh@950 -- # wait 414334 00:05:37.285 16:57:53 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:37.285 16:57:53 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:37.285 16:57:53 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:37.285 16:57:53 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:37.285 16:57:53 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:37.285 16:57:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:37.285 16:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.285 16:57:53 
-- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:37.285 16:57:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.285 16:57:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.285 16:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.285 ************************************ 00:05:37.285 START TEST env 00:05:37.285 ************************************ 00:05:37.285 16:57:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:37.285 * Looking for test storage... 00:05:37.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:37.285 16:57:53 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.285 16:57:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.285 16:57:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.285 16:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.285 ************************************ 00:05:37.285 START TEST env_memory 00:05:37.285 ************************************ 00:05:37.285 16:57:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.285 00:05:37.285 00:05:37.285 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.285 http://cunit.sourceforge.net/ 00:05:37.285 00:05:37.286 00:05:37.286 Suite: memory 00:05:37.286 Test: alloc and free memory map ...[2024-07-20 16:57:53.181407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.286 passed 00:05:37.286 Test: mem map translation ...[2024-07-20 16:57:53.202404] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.286 [2024-07-20 16:57:53.202427] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.286 [2024-07-20 16:57:53.202483] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.286 [2024-07-20 16:57:53.202496] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.286 passed 00:05:37.286 Test: mem map registration ...[2024-07-20 16:57:53.246900] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:37.286 [2024-07-20 16:57:53.246922] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:37.286 passed 00:05:37.286 Test: mem map adjacent registrations ...passed 00:05:37.286 00:05:37.286 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.286 suites 1 1 n/a 0 0 00:05:37.286 tests 4 4 4 0 0 00:05:37.286 asserts 152 152 152 0 n/a 00:05:37.286 00:05:37.286 Elapsed time = 0.152 seconds 00:05:37.286 00:05:37.286 real 0m0.160s 00:05:37.286 user 0m0.152s 00:05:37.286 sys 0m0.007s 00:05:37.286 
16:57:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.286 16:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.286 ************************************ 00:05:37.286 END TEST env_memory 00:05:37.286 ************************************ 00:05:37.286 16:57:53 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.286 16:57:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.286 16:57:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.286 16:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:37.286 ************************************ 00:05:37.286 START TEST env_vtophys 00:05:37.286 ************************************ 00:05:37.286 16:57:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.286 EAL: lib.eal log level changed from notice to debug 00:05:37.286 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.286 EAL: Detected lcore 1 as core 1 on socket 0 00:05:37.286 EAL: Detected lcore 2 as core 2 on socket 0 00:05:37.286 EAL: Detected lcore 3 as core 3 on socket 0 00:05:37.286 EAL: Detected lcore 4 as core 4 on socket 0 00:05:37.286 EAL: Detected lcore 5 as core 5 on socket 0 00:05:37.286 EAL: Detected lcore 6 as core 8 on socket 0 00:05:37.286 EAL: Detected lcore 7 as core 9 on socket 0 00:05:37.286 EAL: Detected lcore 8 as core 10 on socket 0 00:05:37.286 EAL: Detected lcore 9 as core 11 on socket 0 00:05:37.286 EAL: Detected lcore 10 as core 12 on socket 0 00:05:37.286 EAL: Detected lcore 11 as core 13 on socket 0 00:05:37.286 EAL: Detected lcore 12 as core 0 on socket 1 00:05:37.286 EAL: Detected lcore 13 as core 1 on socket 1 00:05:37.286 EAL: Detected lcore 14 as core 2 on socket 1 00:05:37.286 EAL: Detected lcore 15 as core 3 on socket 1 00:05:37.286 EAL: Detected lcore 16 as core 4 on socket 1 00:05:37.286 EAL: Detected lcore 17 as core 5 on socket 1 00:05:37.286 EAL: Detected lcore 18 as core 8 on socket 1 00:05:37.286 EAL: Detected lcore 19 as core 9 on socket 1 00:05:37.286 EAL: Detected lcore 20 as core 10 on socket 1 00:05:37.286 EAL: Detected lcore 21 as core 11 on socket 1 00:05:37.286 EAL: Detected lcore 22 as core 12 on socket 1 00:05:37.286 EAL: Detected lcore 23 as core 13 on socket 1 00:05:37.286 EAL: Detected lcore 24 as core 0 on socket 0 00:05:37.286 EAL: Detected lcore 25 as core 1 on socket 0 00:05:37.286 EAL: Detected lcore 26 as core 2 on socket 0 00:05:37.286 EAL: Detected lcore 27 as core 3 on socket 0 00:05:37.286 EAL: Detected lcore 28 as core 4 on socket 0 00:05:37.286 EAL: Detected lcore 29 as core 5 on socket 0 00:05:37.286 EAL: Detected lcore 30 as core 8 on socket 0 00:05:37.286 EAL: Detected lcore 31 as core 9 on socket 0 00:05:37.286 EAL: Detected lcore 32 as core 10 on socket 0 00:05:37.286 EAL: Detected lcore 33 as core 11 on socket 0 00:05:37.286 EAL: Detected lcore 34 as core 12 on socket 0 00:05:37.286 EAL: Detected lcore 35 as core 13 on socket 0 00:05:37.286 EAL: Detected lcore 36 as core 0 on socket 1 00:05:37.286 EAL: Detected lcore 37 as core 1 on socket 1 00:05:37.286 EAL: Detected lcore 38 as core 2 on socket 1 00:05:37.286 EAL: Detected lcore 39 as core 3 on socket 1 00:05:37.286 EAL: Detected lcore 40 as core 4 on socket 1 00:05:37.286 EAL: Detected lcore 41 as core 5 on socket 1 00:05:37.286 EAL: Detected lcore 42 as core 8 on socket 1 00:05:37.286 EAL: Detected lcore 43 as core 9 on socket 1 00:05:37.286 EAL: Detected lcore 44 as 
core 10 on socket 1 00:05:37.286 EAL: Detected lcore 45 as core 11 on socket 1 00:05:37.286 EAL: Detected lcore 46 as core 12 on socket 1 00:05:37.286 EAL: Detected lcore 47 as core 13 on socket 1 00:05:37.286 EAL: Maximum logical cores by configuration: 128 00:05:37.286 EAL: Detected CPU lcores: 48 00:05:37.286 EAL: Detected NUMA nodes: 2 00:05:37.286 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:37.286 EAL: Detected shared linkage of DPDK 00:05:37.286 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:37.286 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:37.286 EAL: Registered [vdev] bus. 00:05:37.286 EAL: bus.vdev log level changed from disabled to notice 00:05:37.286 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:37.286 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:37.286 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:37.286 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:37.286 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:37.286 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:37.286 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:37.286 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:37.286 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: Bus pci wants IOVA as 'DC' 00:05:37.286 EAL: Bus vdev wants IOVA as 'DC' 00:05:37.286 EAL: Buses did not request a specific IOVA mode. 00:05:37.286 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:37.286 EAL: Selected IOVA mode 'VA' 00:05:37.286 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.286 EAL: Probing VFIO support... 00:05:37.286 EAL: IOMMU type 1 (Type 1) is supported 00:05:37.286 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:37.286 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:37.286 EAL: VFIO support initialized 00:05:37.286 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.286 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.286 EAL: Setting up physically contiguous memory... 
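The "IOMMU is available, selecting IOVA as VA" decision above depends on the host exposing IOMMU groups and on the devices being bound to vfio-pci (done earlier in this log). A rough manual check of the same preconditions (illustrative sketch only, not part of the test scripts):

    # Non-empty output means the kernel assembled IOMMU groups (IOMMU enabled)
    ls /sys/kernel/iommu_groups/
    # Confirm the NVMe controller's current driver binding
    readlink /sys/bus/pci/devices/0000:88:00.0/driver   # expect .../vfio-pci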
00:05:37.286 EAL: Setting maximum number of open files to 524288 00:05:37.286 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.286 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:37.286 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.286 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.286 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.286 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.286 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.286 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.286 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.286 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.286 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.286 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.286 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.286 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.286 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.286 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.286 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.286 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.286 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.286 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:37.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.286 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:37.286 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.286 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:37.286 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:37.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.286 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:37.286 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.286 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:37.286 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:37.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.286 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:37.286 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.286 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:37.286 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:37.286 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.286 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:37.286 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.286 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.286 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:37.286 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:37.286 EAL: Hugepages will be freed exactly as allocated. 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: TSC frequency is ~2700000 KHz 00:05:37.286 EAL: Main lcore 0 is ready (tid=7f75ecf45a00;cpuset=[0]) 00:05:37.286 EAL: Trying to obtain current memory policy. 00:05:37.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.286 EAL: Restoring previous memory policy: 0 00:05:37.286 EAL: request: mp_malloc_sync 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.286 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.286 00:05:37.286 00:05:37.286 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.286 http://cunit.sourceforge.net/ 00:05:37.286 00:05:37.286 00:05:37.286 Suite: components_suite 00:05:37.286 Test: vtophys_malloc_test ...passed 00:05:37.286 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:37.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.286 EAL: Restoring previous memory policy: 4 00:05:37.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.286 EAL: request: mp_malloc_sync 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.286 EAL: request: mp_malloc_sync 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.286 EAL: Trying to obtain current memory policy. 00:05:37.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.286 EAL: Restoring previous memory policy: 4 00:05:37.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.286 EAL: request: mp_malloc_sync 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.286 EAL: request: mp_malloc_sync 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.286 EAL: Trying to obtain current memory policy. 00:05:37.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.286 EAL: Restoring previous memory policy: 4 00:05:37.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.286 EAL: request: mp_malloc_sync 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.286 EAL: request: mp_malloc_sync 00:05:37.286 EAL: No shared files mode enabled, IPC is disabled 00:05:37.286 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.286 EAL: Trying to obtain current memory policy. 
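Each expand/shrink pair in this stretch of the log is DPDK growing and then releasing heap in hugepage-backed memory; since "Hugepages will be freed exactly as allocated", the kernel's per-node counters move in step with these messages. They can be watched from another shell (a sketch, assuming the 2048kB pages on node 0 reported by setup.sh status earlier):

    grep . /sys/devices/system/node/node0/hugepages/hugepages-2048kB/{nr,free}_hugepages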
00:05:37.286 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.286 EAL: Restoring previous memory policy: 4 00:05:37.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.286 EAL: request: mp_malloc_sync 00:05:37.287 EAL: No shared files mode enabled, IPC is disabled 00:05:37.287 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.287 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.287 EAL: request: mp_malloc_sync 00:05:37.287 EAL: No shared files mode enabled, IPC is disabled 00:05:37.287 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.287 EAL: Trying to obtain current memory policy. 00:05:37.287 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.287 EAL: Restoring previous memory policy: 4 00:05:37.287 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.287 EAL: request: mp_malloc_sync 00:05:37.287 EAL: No shared files mode enabled, IPC is disabled 00:05:37.287 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.287 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.287 EAL: request: mp_malloc_sync 00:05:37.287 EAL: No shared files mode enabled, IPC is disabled 00:05:37.287 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.287 EAL: Trying to obtain current memory policy. 00:05:37.287 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.544 EAL: Restoring previous memory policy: 4 00:05:37.544 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.544 EAL: request: mp_malloc_sync 00:05:37.544 EAL: No shared files mode enabled, IPC is disabled 00:05:37.544 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.544 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.544 EAL: request: mp_malloc_sync 00:05:37.544 EAL: No shared files mode enabled, IPC is disabled 00:05:37.544 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.544 EAL: Trying to obtain current memory policy. 00:05:37.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.544 EAL: Restoring previous memory policy: 4 00:05:37.544 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.544 EAL: request: mp_malloc_sync 00:05:37.544 EAL: No shared files mode enabled, IPC is disabled 00:05:37.544 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.544 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.544 EAL: request: mp_malloc_sync 00:05:37.544 EAL: No shared files mode enabled, IPC is disabled 00:05:37.544 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.544 EAL: Trying to obtain current memory policy. 00:05:37.544 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.544 EAL: Restoring previous memory policy: 4 00:05:37.544 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.544 EAL: request: mp_malloc_sync 00:05:37.544 EAL: No shared files mode enabled, IPC is disabled 00:05:37.544 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.544 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.801 EAL: request: mp_malloc_sync 00:05:37.801 EAL: No shared files mode enabled, IPC is disabled 00:05:37.801 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.801 EAL: Trying to obtain current memory policy. 
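The expansion sizes in this suite step through power-of-two allocations plus a constant 2 MB already resident on the heap: 4 = 2+2, 6 = 4+2, 10 = 8+2, continuing up to 1026 = 1024+2 further down. The sequence is easy to reproduce (illustration only; the 2 MB residue is an inference from the log, not from the test source):

    for n in $(seq 1 10); do echo "$(( (1 << n) + 2 ))MB"; done   # 4MB 6MB 10MB ... 1026MB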
00:05:37.801 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.801 EAL: Restoring previous memory policy: 4 00:05:37.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.801 EAL: request: mp_malloc_sync 00:05:37.801 EAL: No shared files mode enabled, IPC is disabled 00:05:37.801 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.059 EAL: request: mp_malloc_sync 00:05:38.059 EAL: No shared files mode enabled, IPC is disabled 00:05:38.059 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.059 EAL: Trying to obtain current memory policy. 00:05:38.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.315 EAL: Restoring previous memory policy: 4 00:05:38.315 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.315 EAL: request: mp_malloc_sync 00:05:38.315 EAL: No shared files mode enabled, IPC is disabled 00:05:38.315 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.571 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.829 EAL: request: mp_malloc_sync 00:05:38.829 EAL: No shared files mode enabled, IPC is disabled 00:05:38.829 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.829 passed 00:05:38.829 00:05:38.829 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.829 suites 1 1 n/a 0 0 00:05:38.829 tests 2 2 2 0 0 00:05:38.829 asserts 497 497 497 0 n/a 00:05:38.829 00:05:38.829 Elapsed time = 1.376 seconds 00:05:38.829 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.829 EAL: request: mp_malloc_sync 00:05:38.829 EAL: No shared files mode enabled, IPC is disabled 00:05:38.829 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.829 EAL: No shared files mode enabled, IPC is disabled 00:05:38.829 EAL: No shared files mode enabled, IPC is disabled 00:05:38.829 EAL: No shared files mode enabled, IPC is disabled 00:05:38.829 00:05:38.829 real 0m1.490s 00:05:38.829 user 0m0.853s 00:05:38.829 sys 0m0.605s 00:05:38.829 16:57:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.829 16:57:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.829 ************************************ 00:05:38.829 END TEST env_vtophys 00:05:38.829 ************************************ 00:05:38.829 16:57:54 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.829 16:57:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.829 16:57:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.829 16:57:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.829 ************************************ 00:05:38.829 START TEST env_pci 00:05:38.829 ************************************ 00:05:38.829 16:57:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.829 00:05:38.829 00:05:38.829 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.829 http://cunit.sourceforge.net/ 00:05:38.829 00:05:38.829 00:05:38.829 Suite: pci 00:05:38.829 Test: pci_hook ...[2024-07-20 16:57:54.852725] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 415369 has claimed it 00:05:38.829 EAL: Cannot find device (10000:00:01.0) 00:05:38.829 EAL: Failed to attach device on primary process 00:05:38.829 passed 00:05:38.829 00:05:38.829 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.829 suites 1 1 n/a 0 0 00:05:38.830 tests 1 1 1 0 0 
00:05:38.830 asserts 25 25 25 0 n/a 00:05:38.830 00:05:38.830 Elapsed time = 0.021 seconds 00:05:38.830 00:05:38.830 real 0m0.033s 00:05:38.830 user 0m0.008s 00:05:38.830 sys 0m0.025s 00:05:38.830 16:57:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.830 16:57:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.830 ************************************ 00:05:38.830 END TEST env_pci 00:05:38.830 ************************************ 00:05:38.830 16:57:54 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.830 16:57:54 -- env/env.sh@15 -- # uname 00:05:38.830 16:57:54 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.830 16:57:54 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.830 16:57:54 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.830 16:57:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:38.830 16:57:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.830 16:57:54 -- common/autotest_common.sh@10 -- # set +x 00:05:38.830 ************************************ 00:05:38.830 START TEST env_dpdk_post_init 00:05:38.830 ************************************ 00:05:38.830 16:57:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.830 EAL: Detected CPU lcores: 48 00:05:38.830 EAL: Detected NUMA nodes: 2 00:05:38.830 EAL: Detected shared linkage of DPDK 00:05:38.830 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.830 EAL: Selected IOVA mode 'VA' 00:05:38.830 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.830 EAL: VFIO support initialized 00:05:38.830 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.087 EAL: Using IOMMU type 1 (Type 1) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:39.087 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:40.020 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:43.296 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:43.296 EAL: 
Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:43.296 Starting DPDK initialization... 00:05:43.296 Starting SPDK post initialization... 00:05:43.296 SPDK NVMe probe 00:05:43.296 Attaching to 0000:88:00.0 00:05:43.296 Attached to 0000:88:00.0 00:05:43.296 Cleaning up... 00:05:43.296 00:05:43.296 real 0m4.381s 00:05:43.296 user 0m3.254s 00:05:43.296 sys 0m0.184s 00:05:43.296 16:57:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.296 16:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.296 ************************************ 00:05:43.296 END TEST env_dpdk_post_init 00:05:43.296 ************************************ 00:05:43.296 16:57:59 -- env/env.sh@26 -- # uname 00:05:43.296 16:57:59 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.296 16:57:59 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.296 16:57:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.296 16:57:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.296 16:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.296 ************************************ 00:05:43.296 START TEST env_mem_callbacks 00:05:43.296 ************************************ 00:05:43.297 16:57:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.297 EAL: Detected CPU lcores: 48 00:05:43.297 EAL: Detected NUMA nodes: 2 00:05:43.297 EAL: Detected shared linkage of DPDK 00:05:43.297 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.297 EAL: Selected IOVA mode 'VA' 00:05:43.297 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.297 EAL: VFIO support initialized 00:05:43.297 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.297 00:05:43.297 00:05:43.297 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.297 http://cunit.sourceforge.net/ 00:05:43.297 00:05:43.297 00:05:43.297 Suite: memory 00:05:43.297 Test: test ... 
00:05:43.297 register 0x200000200000 2097152 00:05:43.297 malloc 3145728 00:05:43.297 register 0x200000400000 4194304 00:05:43.297 buf 0x200000500000 len 3145728 PASSED 00:05:43.297 malloc 64 00:05:43.297 buf 0x2000004fff40 len 64 PASSED 00:05:43.297 malloc 4194304 00:05:43.297 register 0x200000800000 6291456 00:05:43.297 buf 0x200000a00000 len 4194304 PASSED 00:05:43.297 free 0x200000500000 3145728 00:05:43.297 free 0x2000004fff40 64 00:05:43.297 unregister 0x200000400000 4194304 PASSED 00:05:43.297 free 0x200000a00000 4194304 00:05:43.297 unregister 0x200000800000 6291456 PASSED 00:05:43.297 malloc 8388608 00:05:43.297 register 0x200000400000 10485760 00:05:43.297 buf 0x200000600000 len 8388608 PASSED 00:05:43.297 free 0x200000600000 8388608 00:05:43.297 unregister 0x200000400000 10485760 PASSED 00:05:43.297 passed 00:05:43.297 00:05:43.297 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.297 suites 1 1 n/a 0 0 00:05:43.297 tests 1 1 1 0 0 00:05:43.297 asserts 15 15 15 0 n/a 00:05:43.297 00:05:43.297 Elapsed time = 0.005 seconds 00:05:43.297 00:05:43.297 real 0m0.049s 00:05:43.297 user 0m0.018s 00:05:43.297 sys 0m0.031s 00:05:43.297 16:57:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.297 16:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.297 ************************************ 00:05:43.297 END TEST env_mem_callbacks 00:05:43.297 ************************************ 00:05:43.297 00:05:43.297 real 0m6.288s 00:05:43.297 user 0m4.360s 00:05:43.297 sys 0m0.977s 00:05:43.297 16:57:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.297 16:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.297 ************************************ 00:05:43.297 END TEST env 00:05:43.297 ************************************ 00:05:43.297 16:57:59 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:43.297 16:57:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.297 16:57:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.297 16:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.297 ************************************ 00:05:43.297 START TEST rpc 00:05:43.297 ************************************ 00:05:43.297 16:57:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:43.297 * Looking for test storage... 00:05:43.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.297 16:57:59 -- rpc/rpc.sh@65 -- # spdk_pid=416037 00:05:43.297 16:57:59 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:43.297 16:57:59 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.297 16:57:59 -- rpc/rpc.sh@67 -- # waitforlisten 416037 00:05:43.297 16:57:59 -- common/autotest_common.sh@819 -- # '[' -z 416037 ']' 00:05:43.297 16:57:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.297 16:57:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.297 16:57:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
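spdk_tgt was just launched with '-e bdev' and waitforlisten is polling its RPC socket; the step amounts to retrying an RPC until the target answers. A minimal stand-in (assuming the default /var/tmp/spdk.sock and the generic rpc_get_methods method, neither specific to this test):

    "$rootdir/build/bin/spdk_tgt" -e bdev &
    spdk_pid=$!
    # Poll until the UNIX-domain RPC socket accepts a request
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
    echo "spdk_tgt ($spdk_pid) is listening"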
00:05:43.297 16:57:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.297 16:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.555 [2024-07-20 16:57:59.499583] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:43.555 [2024-07-20 16:57:59.499663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416037 ] 00:05:43.555 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.555 [2024-07-20 16:57:59.556493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.555 [2024-07-20 16:57:59.638445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.555 [2024-07-20 16:57:59.638602] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:43.555 [2024-07-20 16:57:59.638618] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 416037' to capture a snapshot of events at runtime. 00:05:43.555 [2024-07-20 16:57:59.638630] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid416037 for offline analysis/debug. 00:05:43.555 [2024-07-20 16:57:59.638655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.513 16:58:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.513 16:58:00 -- common/autotest_common.sh@852 -- # return 0 00:05:44.513 16:58:00 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.513 16:58:00 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.513 16:58:00 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:44.513 16:58:00 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:44.513 16:58:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.513 16:58:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 ************************************ 00:05:44.513 START TEST rpc_integrity 00:05:44.513 ************************************ 00:05:44.513 16:58:00 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:44.513 16:58:00 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:44.513 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.513 16:58:00 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:44.513 16:58:00 -- rpc/rpc.sh@13 -- # jq length 00:05:44.513 16:58:00 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:44.513 16:58:00 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.513 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 16:58:00 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:05:44.513 16:58:00 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:44.513 16:58:00 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.513 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.513 16:58:00 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.513 { 00:05:44.513 "name": "Malloc0", 00:05:44.513 "aliases": [ 00:05:44.513 "f60df2c9-8cb7-447a-b2c2-6ef091601e32" 00:05:44.513 ], 00:05:44.513 "product_name": "Malloc disk", 00:05:44.513 "block_size": 512, 00:05:44.513 "num_blocks": 16384, 00:05:44.513 "uuid": "f60df2c9-8cb7-447a-b2c2-6ef091601e32", 00:05:44.513 "assigned_rate_limits": { 00:05:44.513 "rw_ios_per_sec": 0, 00:05:44.513 "rw_mbytes_per_sec": 0, 00:05:44.513 "r_mbytes_per_sec": 0, 00:05:44.513 "w_mbytes_per_sec": 0 00:05:44.513 }, 00:05:44.513 "claimed": false, 00:05:44.513 "zoned": false, 00:05:44.513 "supported_io_types": { 00:05:44.513 "read": true, 00:05:44.513 "write": true, 00:05:44.513 "unmap": true, 00:05:44.513 "write_zeroes": true, 00:05:44.513 "flush": true, 00:05:44.513 "reset": true, 00:05:44.513 "compare": false, 00:05:44.513 "compare_and_write": false, 00:05:44.513 "abort": true, 00:05:44.513 "nvme_admin": false, 00:05:44.513 "nvme_io": false 00:05:44.513 }, 00:05:44.513 "memory_domains": [ 00:05:44.513 { 00:05:44.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.513 "dma_device_type": 2 00:05:44.513 } 00:05:44.513 ], 00:05:44.513 "driver_specific": {} 00:05:44.513 } 00:05:44.513 ]' 00:05:44.513 16:58:00 -- rpc/rpc.sh@17 -- # jq length 00:05:44.513 16:58:00 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.513 16:58:00 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:44.513 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 [2024-07-20 16:58:00.538865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:44.513 [2024-07-20 16:58:00.538929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.513 [2024-07-20 16:58:00.538955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5823b0 00:05:44.513 [2024-07-20 16:58:00.538971] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.513 [2024-07-20 16:58:00.540407] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.513 [2024-07-20 16:58:00.540436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.513 Passthru0 00:05:44.513 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.513 16:58:00 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.513 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.513 16:58:00 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.513 { 00:05:44.513 "name": "Malloc0", 00:05:44.513 "aliases": [ 00:05:44.513 "f60df2c9-8cb7-447a-b2c2-6ef091601e32" 00:05:44.513 ], 00:05:44.513 "product_name": "Malloc disk", 00:05:44.513 "block_size": 512, 00:05:44.513 "num_blocks": 16384, 00:05:44.513 "uuid": "f60df2c9-8cb7-447a-b2c2-6ef091601e32", 00:05:44.513 "assigned_rate_limits": { 00:05:44.513 "rw_ios_per_sec": 0, 00:05:44.513 "rw_mbytes_per_sec": 0, 00:05:44.513 
"r_mbytes_per_sec": 0, 00:05:44.513 "w_mbytes_per_sec": 0 00:05:44.513 }, 00:05:44.513 "claimed": true, 00:05:44.513 "claim_type": "exclusive_write", 00:05:44.513 "zoned": false, 00:05:44.513 "supported_io_types": { 00:05:44.513 "read": true, 00:05:44.513 "write": true, 00:05:44.513 "unmap": true, 00:05:44.513 "write_zeroes": true, 00:05:44.513 "flush": true, 00:05:44.513 "reset": true, 00:05:44.513 "compare": false, 00:05:44.513 "compare_and_write": false, 00:05:44.513 "abort": true, 00:05:44.513 "nvme_admin": false, 00:05:44.513 "nvme_io": false 00:05:44.513 }, 00:05:44.513 "memory_domains": [ 00:05:44.513 { 00:05:44.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.513 "dma_device_type": 2 00:05:44.513 } 00:05:44.513 ], 00:05:44.513 "driver_specific": {} 00:05:44.513 }, 00:05:44.513 { 00:05:44.513 "name": "Passthru0", 00:05:44.513 "aliases": [ 00:05:44.513 "198244f5-280f-5750-85b8-b7b69958e827" 00:05:44.513 ], 00:05:44.513 "product_name": "passthru", 00:05:44.513 "block_size": 512, 00:05:44.513 "num_blocks": 16384, 00:05:44.513 "uuid": "198244f5-280f-5750-85b8-b7b69958e827", 00:05:44.513 "assigned_rate_limits": { 00:05:44.513 "rw_ios_per_sec": 0, 00:05:44.513 "rw_mbytes_per_sec": 0, 00:05:44.513 "r_mbytes_per_sec": 0, 00:05:44.513 "w_mbytes_per_sec": 0 00:05:44.513 }, 00:05:44.513 "claimed": false, 00:05:44.513 "zoned": false, 00:05:44.513 "supported_io_types": { 00:05:44.513 "read": true, 00:05:44.513 "write": true, 00:05:44.513 "unmap": true, 00:05:44.513 "write_zeroes": true, 00:05:44.513 "flush": true, 00:05:44.513 "reset": true, 00:05:44.513 "compare": false, 00:05:44.513 "compare_and_write": false, 00:05:44.513 "abort": true, 00:05:44.513 "nvme_admin": false, 00:05:44.513 "nvme_io": false 00:05:44.513 }, 00:05:44.513 "memory_domains": [ 00:05:44.513 { 00:05:44.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.513 "dma_device_type": 2 00:05:44.513 } 00:05:44.513 ], 00:05:44.513 "driver_specific": { 00:05:44.513 "passthru": { 00:05:44.513 "name": "Passthru0", 00:05:44.513 "base_bdev_name": "Malloc0" 00:05:44.513 } 00:05:44.513 } 00:05:44.513 } 00:05:44.513 ]' 00:05:44.513 16:58:00 -- rpc/rpc.sh@21 -- # jq length 00:05:44.513 16:58:00 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.513 16:58:00 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.513 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.513 16:58:00 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:44.513 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.513 16:58:00 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.513 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.513 16:58:00 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.513 16:58:00 -- rpc/rpc.sh@26 -- # jq length 00:05:44.513 16:58:00 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.513 00:05:44.513 real 0m0.223s 00:05:44.513 user 0m0.145s 00:05:44.513 sys 0m0.021s 00:05:44.513 16:58:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.513 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.513 ************************************ 
00:05:44.513 END TEST rpc_integrity 00:05:44.513 ************************************ 00:05:44.771 16:58:00 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:44.771 16:58:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.771 16:58:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.771 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.771 ************************************ 00:05:44.771 START TEST rpc_plugins 00:05:44.771 ************************************ 00:05:44.771 16:58:00 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:44.771 16:58:00 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:44.771 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.771 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.771 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.771 16:58:00 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:44.771 16:58:00 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:44.771 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.771 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.771 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.771 16:58:00 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:44.771 { 00:05:44.771 "name": "Malloc1", 00:05:44.771 "aliases": [ 00:05:44.771 "4b66761c-cc52-443f-be01-dbca44ab9fcb" 00:05:44.771 ], 00:05:44.771 "product_name": "Malloc disk", 00:05:44.771 "block_size": 4096, 00:05:44.771 "num_blocks": 256, 00:05:44.771 "uuid": "4b66761c-cc52-443f-be01-dbca44ab9fcb", 00:05:44.771 "assigned_rate_limits": { 00:05:44.771 "rw_ios_per_sec": 0, 00:05:44.771 "rw_mbytes_per_sec": 0, 00:05:44.771 "r_mbytes_per_sec": 0, 00:05:44.771 "w_mbytes_per_sec": 0 00:05:44.771 }, 00:05:44.771 "claimed": false, 00:05:44.771 "zoned": false, 00:05:44.771 "supported_io_types": { 00:05:44.771 "read": true, 00:05:44.771 "write": true, 00:05:44.771 "unmap": true, 00:05:44.771 "write_zeroes": true, 00:05:44.771 "flush": true, 00:05:44.771 "reset": true, 00:05:44.771 "compare": false, 00:05:44.771 "compare_and_write": false, 00:05:44.772 "abort": true, 00:05:44.772 "nvme_admin": false, 00:05:44.772 "nvme_io": false 00:05:44.772 }, 00:05:44.772 "memory_domains": [ 00:05:44.772 { 00:05:44.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.772 "dma_device_type": 2 00:05:44.772 } 00:05:44.772 ], 00:05:44.772 "driver_specific": {} 00:05:44.772 } 00:05:44.772 ]' 00:05:44.772 16:58:00 -- rpc/rpc.sh@32 -- # jq length 00:05:44.772 16:58:00 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:44.772 16:58:00 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:44.772 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.772 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.772 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.772 16:58:00 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:44.772 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.772 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.772 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.772 16:58:00 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:44.772 16:58:00 -- rpc/rpc.sh@36 -- # jq length 00:05:44.772 16:58:00 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:44.772 00:05:44.772 real 0m0.108s 00:05:44.772 user 0m0.070s 00:05:44.772 sys 0m0.010s 00:05:44.772 16:58:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.772 16:58:00 -- 
common/autotest_common.sh@10 -- # set +x 00:05:44.772 ************************************ 00:05:44.772 END TEST rpc_plugins 00:05:44.772 ************************************ 00:05:44.772 16:58:00 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:44.772 16:58:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.772 16:58:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.772 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.772 ************************************ 00:05:44.772 START TEST rpc_trace_cmd_test 00:05:44.772 ************************************ 00:05:44.772 16:58:00 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:44.772 16:58:00 -- rpc/rpc.sh@40 -- # local info 00:05:44.772 16:58:00 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:44.772 16:58:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.772 16:58:00 -- common/autotest_common.sh@10 -- # set +x 00:05:44.772 16:58:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.772 16:58:00 -- rpc/rpc.sh@42 -- # info='{ 00:05:44.772 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid416037", 00:05:44.772 "tpoint_group_mask": "0x8", 00:05:44.772 "iscsi_conn": { 00:05:44.772 "mask": "0x2", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "scsi": { 00:05:44.772 "mask": "0x4", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "bdev": { 00:05:44.772 "mask": "0x8", 00:05:44.772 "tpoint_mask": "0xffffffffffffffff" 00:05:44.772 }, 00:05:44.772 "nvmf_rdma": { 00:05:44.772 "mask": "0x10", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "nvmf_tcp": { 00:05:44.772 "mask": "0x20", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "ftl": { 00:05:44.772 "mask": "0x40", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "blobfs": { 00:05:44.772 "mask": "0x80", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "dsa": { 00:05:44.772 "mask": "0x200", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "thread": { 00:05:44.772 "mask": "0x400", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "nvme_pcie": { 00:05:44.772 "mask": "0x800", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "iaa": { 00:05:44.772 "mask": "0x1000", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "nvme_tcp": { 00:05:44.772 "mask": "0x2000", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 }, 00:05:44.772 "bdev_nvme": { 00:05:44.772 "mask": "0x4000", 00:05:44.772 "tpoint_mask": "0x0" 00:05:44.772 } 00:05:44.772 }' 00:05:44.772 16:58:00 -- rpc/rpc.sh@43 -- # jq length 00:05:44.772 16:58:00 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:44.772 16:58:00 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:44.772 16:58:00 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:44.772 16:58:00 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.030 16:58:00 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.030 16:58:00 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.030 16:58:00 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.030 16:58:00 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.030 16:58:01 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.030 00:05:45.030 real 0m0.195s 00:05:45.030 user 0m0.174s 00:05:45.030 sys 0m0.014s 00:05:45.030 16:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.030 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.030 ************************************ 
00:05:45.030 END TEST rpc_trace_cmd_test 00:05:45.030 ************************************ 00:05:45.030 16:58:01 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:45.030 16:58:01 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.030 16:58:01 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.030 16:58:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.030 16:58:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.030 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.030 ************************************ 00:05:45.030 START TEST rpc_daemon_integrity 00:05:45.030 ************************************ 00:05:45.030 16:58:01 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:45.030 16:58:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.030 16:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.030 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.030 16:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.030 16:58:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.030 16:58:01 -- rpc/rpc.sh@13 -- # jq length 00:05:45.030 16:58:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.030 16:58:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.030 16:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.030 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.030 16:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.030 16:58:01 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:45.030 16:58:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.030 16:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.030 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.030 16:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.030 16:58:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.030 { 00:05:45.030 "name": "Malloc2", 00:05:45.030 "aliases": [ 00:05:45.030 "3eca806a-b60c-4fce-998e-183297fabaf0" 00:05:45.030 ], 00:05:45.030 "product_name": "Malloc disk", 00:05:45.030 "block_size": 512, 00:05:45.030 "num_blocks": 16384, 00:05:45.030 "uuid": "3eca806a-b60c-4fce-998e-183297fabaf0", 00:05:45.030 "assigned_rate_limits": { 00:05:45.030 "rw_ios_per_sec": 0, 00:05:45.030 "rw_mbytes_per_sec": 0, 00:05:45.030 "r_mbytes_per_sec": 0, 00:05:45.030 "w_mbytes_per_sec": 0 00:05:45.030 }, 00:05:45.030 "claimed": false, 00:05:45.030 "zoned": false, 00:05:45.030 "supported_io_types": { 00:05:45.030 "read": true, 00:05:45.030 "write": true, 00:05:45.030 "unmap": true, 00:05:45.030 "write_zeroes": true, 00:05:45.030 "flush": true, 00:05:45.030 "reset": true, 00:05:45.030 "compare": false, 00:05:45.030 "compare_and_write": false, 00:05:45.030 "abort": true, 00:05:45.030 "nvme_admin": false, 00:05:45.030 "nvme_io": false 00:05:45.030 }, 00:05:45.030 "memory_domains": [ 00:05:45.030 { 00:05:45.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.030 "dma_device_type": 2 00:05:45.030 } 00:05:45.030 ], 00:05:45.030 "driver_specific": {} 00:05:45.030 } 00:05:45.030 ]' 00:05:45.030 16:58:01 -- rpc/rpc.sh@17 -- # jq length 00:05:45.030 16:58:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.030 16:58:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:45.030 16:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.030 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.030 [2024-07-20 16:58:01.152659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:45.030 [2024-07-20 
16:58:01.152705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.030 [2024-07-20 16:58:01.152733] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x582020 00:05:45.030 [2024-07-20 16:58:01.152749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.030 [2024-07-20 16:58:01.154095] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.030 [2024-07-20 16:58:01.154123] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.030 Passthru0 00:05:45.030 16:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.030 16:58:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.030 16:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.030 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.030 16:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.030 16:58:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.030 { 00:05:45.030 "name": "Malloc2", 00:05:45.030 "aliases": [ 00:05:45.030 "3eca806a-b60c-4fce-998e-183297fabaf0" 00:05:45.030 ], 00:05:45.030 "product_name": "Malloc disk", 00:05:45.030 "block_size": 512, 00:05:45.030 "num_blocks": 16384, 00:05:45.030 "uuid": "3eca806a-b60c-4fce-998e-183297fabaf0", 00:05:45.030 "assigned_rate_limits": { 00:05:45.030 "rw_ios_per_sec": 0, 00:05:45.030 "rw_mbytes_per_sec": 0, 00:05:45.030 "r_mbytes_per_sec": 0, 00:05:45.030 "w_mbytes_per_sec": 0 00:05:45.030 }, 00:05:45.030 "claimed": true, 00:05:45.030 "claim_type": "exclusive_write", 00:05:45.030 "zoned": false, 00:05:45.030 "supported_io_types": { 00:05:45.030 "read": true, 00:05:45.030 "write": true, 00:05:45.030 "unmap": true, 00:05:45.030 "write_zeroes": true, 00:05:45.030 "flush": true, 00:05:45.030 "reset": true, 00:05:45.031 "compare": false, 00:05:45.031 "compare_and_write": false, 00:05:45.031 "abort": true, 00:05:45.031 "nvme_admin": false, 00:05:45.031 "nvme_io": false 00:05:45.031 }, 00:05:45.031 "memory_domains": [ 00:05:45.031 { 00:05:45.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.031 "dma_device_type": 2 00:05:45.031 } 00:05:45.031 ], 00:05:45.031 "driver_specific": {} 00:05:45.031 }, 00:05:45.031 { 00:05:45.031 "name": "Passthru0", 00:05:45.031 "aliases": [ 00:05:45.031 "9cb30e6f-10ee-5f9f-b6f0-8449ccec67eb" 00:05:45.031 ], 00:05:45.031 "product_name": "passthru", 00:05:45.031 "block_size": 512, 00:05:45.031 "num_blocks": 16384, 00:05:45.031 "uuid": "9cb30e6f-10ee-5f9f-b6f0-8449ccec67eb", 00:05:45.031 "assigned_rate_limits": { 00:05:45.031 "rw_ios_per_sec": 0, 00:05:45.031 "rw_mbytes_per_sec": 0, 00:05:45.031 "r_mbytes_per_sec": 0, 00:05:45.031 "w_mbytes_per_sec": 0 00:05:45.031 }, 00:05:45.031 "claimed": false, 00:05:45.031 "zoned": false, 00:05:45.031 "supported_io_types": { 00:05:45.031 "read": true, 00:05:45.031 "write": true, 00:05:45.031 "unmap": true, 00:05:45.031 "write_zeroes": true, 00:05:45.031 "flush": true, 00:05:45.031 "reset": true, 00:05:45.031 "compare": false, 00:05:45.031 "compare_and_write": false, 00:05:45.031 "abort": true, 00:05:45.031 "nvme_admin": false, 00:05:45.031 "nvme_io": false 00:05:45.031 }, 00:05:45.031 "memory_domains": [ 00:05:45.031 { 00:05:45.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.031 "dma_device_type": 2 00:05:45.031 } 00:05:45.031 ], 00:05:45.031 "driver_specific": { 00:05:45.031 "passthru": { 00:05:45.031 "name": "Passthru0", 00:05:45.031 "base_bdev_name": "Malloc2" 00:05:45.031 } 00:05:45.031 } 00:05:45.031 } 
00:05:45.031 ]' 00:05:45.031 16:58:01 -- rpc/rpc.sh@21 -- # jq length 00:05:45.289 16:58:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.289 16:58:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.289 16:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.289 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.289 16:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.289 16:58:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:45.289 16:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.289 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.289 16:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.289 16:58:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.289 16:58:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.289 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.289 16:58:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.289 16:58:01 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.289 16:58:01 -- rpc/rpc.sh@26 -- # jq length 00:05:45.289 16:58:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.289 00:05:45.289 real 0m0.226s 00:05:45.289 user 0m0.151s 00:05:45.289 sys 0m0.020s 00:05:45.289 16:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.289 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.289 ************************************ 00:05:45.289 END TEST rpc_daemon_integrity 00:05:45.289 ************************************ 00:05:45.289 16:58:01 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:45.289 16:58:01 -- rpc/rpc.sh@84 -- # killprocess 416037 00:05:45.289 16:58:01 -- common/autotest_common.sh@926 -- # '[' -z 416037 ']' 00:05:45.289 16:58:01 -- common/autotest_common.sh@930 -- # kill -0 416037 00:05:45.289 16:58:01 -- common/autotest_common.sh@931 -- # uname 00:05:45.289 16:58:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.289 16:58:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 416037 00:05:45.289 16:58:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.289 16:58:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.289 16:58:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 416037' 00:05:45.289 killing process with pid 416037 00:05:45.289 16:58:01 -- common/autotest_common.sh@945 -- # kill 416037 00:05:45.289 16:58:01 -- common/autotest_common.sh@950 -- # wait 416037 00:05:45.855 00:05:45.855 real 0m2.309s 00:05:45.855 user 0m2.954s 00:05:45.855 sys 0m0.559s 00:05:45.855 16:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.855 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.855 ************************************ 00:05:45.855 END TEST rpc 00:05:45.855 ************************************ 00:05:45.855 16:58:01 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.855 16:58:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.855 16:58:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.855 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.855 ************************************ 00:05:45.855 START TEST rpc_client 00:05:45.855 ************************************ 00:05:45.855 16:58:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.855 * 
Looking for test storage... 00:05:45.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:45.855 16:58:01 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:45.855 OK 00:05:45.855 16:58:01 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.855 00:05:45.855 real 0m0.063s 00:05:45.855 user 0m0.033s 00:05:45.855 sys 0m0.035s 00:05:45.855 16:58:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.855 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.855 ************************************ 00:05:45.855 END TEST rpc_client 00:05:45.855 ************************************ 00:05:45.855 16:58:01 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.855 16:58:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.855 16:58:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.855 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.855 ************************************ 00:05:45.855 START TEST json_config 00:05:45.855 ************************************ 00:05:45.855 16:58:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.855 16:58:01 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.855 16:58:01 -- nvmf/common.sh@7 -- # uname -s 00:05:45.855 16:58:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.855 16:58:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.855 16:58:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.855 16:58:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.855 16:58:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.855 16:58:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.855 16:58:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.855 16:58:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.855 16:58:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.855 16:58:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.855 16:58:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.855 16:58:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.855 16:58:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.855 16:58:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.855 16:58:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.855 16:58:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.855 16:58:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.855 16:58:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.855 16:58:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.855 16:58:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.855 16:58:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.855 16:58:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.855 16:58:01 -- paths/export.sh@5 -- # export PATH 00:05:45.855 16:58:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.855 16:58:01 -- nvmf/common.sh@46 -- # : 0 00:05:45.855 16:58:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:45.855 16:58:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:45.856 16:58:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:45.856 16:58:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.856 16:58:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.856 16:58:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:45.856 16:58:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:45.856 16:58:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:45.856 16:58:01 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:45.856 16:58:01 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:45.856 16:58:01 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:45.856 16:58:01 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.856 16:58:01 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:45.856 16:58:01 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:45.856 16:58:01 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:45.856 16:58:01 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:45.856 16:58:01 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:45.856 16:58:01 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:45.856 16:58:01 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:45.856 16:58:01 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:45.856 16:58:01 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:45.856 16:58:01 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.856 16:58:01 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:45.856 INFO: JSON configuration test init 00:05:45.856 16:58:01 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:45.856 16:58:01 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:45.856 16:58:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.856 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.856 16:58:01 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:45.856 16:58:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.856 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.856 16:58:01 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:45.856 16:58:01 -- json_config/json_config.sh@98 -- # local app=target 00:05:45.856 16:58:01 -- json_config/json_config.sh@99 -- # shift 00:05:45.856 16:58:01 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:45.856 16:58:01 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:45.856 16:58:01 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:45.856 16:58:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:45.856 16:58:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:45.856 16:58:01 -- json_config/json_config.sh@111 -- # app_pid[$app]=416512 00:05:45.856 16:58:01 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:45.856 16:58:01 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:45.856 Waiting for target to run... 00:05:45.856 16:58:01 -- json_config/json_config.sh@114 -- # waitforlisten 416512 /var/tmp/spdk_tgt.sock 00:05:45.856 16:58:01 -- common/autotest_common.sh@819 -- # '[' -z 416512 ']' 00:05:45.856 16:58:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.856 16:58:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.856 16:58:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.856 16:58:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.856 16:58:01 -- common/autotest_common.sh@10 -- # set +x 00:05:45.856 [2024-07-20 16:58:01.928048] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
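The --wait-for-rpc launch above starts spdk_tgt with its subsystems left uninitialized and blocks until RPCs arrive on the UNIX-domain socket, which is why the script then waits on /var/tmp/spdk_tgt.sock. A minimal sketch of that wait, assuming the SPDK repo root as the working directory (the real waitforlisten helper in common/autotest_common.sh adds timeouts and more bookkeeping):

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  tgt_pid=$!
  # Poll the RPC socket until the target answers; bail out if it died on startup.
  for i in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$tgt_pid" || exit 1
      sleep 0.1
  done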
00:05:45.856 [2024-07-20 16:58:01.928149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid416512 ] 00:05:45.856 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.114 [2024-07-20 16:58:02.263270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.372 [2024-07-20 16:58:02.324724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.372 [2024-07-20 16:58:02.324917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.938 16:58:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.938 16:58:02 -- common/autotest_common.sh@852 -- # return 0 00:05:46.938 16:58:02 -- json_config/json_config.sh@115 -- # echo '' 00:05:46.938 00:05:46.938 16:58:02 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:46.938 16:58:02 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:46.938 16:58:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:46.938 16:58:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.938 16:58:02 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:46.938 16:58:02 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:46.938 16:58:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:46.938 16:58:02 -- common/autotest_common.sh@10 -- # set +x 00:05:46.938 16:58:02 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:46.938 16:58:02 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:46.938 16:58:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:50.220 16:58:06 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:50.220 16:58:06 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:50.220 16:58:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.220 16:58:06 -- common/autotest_common.sh@10 -- # set +x 00:05:50.220 16:58:06 -- json_config/json_config.sh@48 -- # local ret=0 00:05:50.220 16:58:06 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:50.220 16:58:06 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:50.220 16:58:06 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:50.220 16:58:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:50.220 16:58:06 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:50.220 16:58:06 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:50.220 16:58:06 -- json_config/json_config.sh@51 -- # local get_types 00:05:50.220 16:58:06 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:50.220 16:58:06 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:50.220 16:58:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:50.220 16:58:06 -- common/autotest_common.sh@10 -- # set +x 00:05:50.220 16:58:06 -- json_config/json_config.sh@58 -- # return 0 00:05:50.220 16:58:06 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:50.220 16:58:06 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:50.220 16:58:06 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:50.220 16:58:06 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:50.220 16:58:06 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:50.220 16:58:06 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:50.220 16:58:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.220 16:58:06 -- common/autotest_common.sh@10 -- # set +x 00:05:50.220 16:58:06 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:50.220 16:58:06 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:50.220 16:58:06 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:50.220 16:58:06 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:50.220 16:58:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:50.477 MallocForNvmf0 00:05:50.477 16:58:06 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:50.477 16:58:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:50.734 MallocForNvmf1 00:05:50.734 16:58:06 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:50.734 16:58:06 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:50.991 [2024-07-20 16:58:06.993240] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.991 16:58:07 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:50.991 16:58:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:51.248 16:58:07 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.248 16:58:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.505 16:58:07 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.505 16:58:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.762 16:58:07 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:51.762 16:58:07 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:52.019 [2024-07-20 16:58:07.952399] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
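The create_nvmf_subsystem_config steps above boil down to a fixed sequence of rpc.py calls. Collapsed into a standalone sketch with the same arguments as the log ($rpc is shorthand introduced here; paths assume the SPDK repo root):

  rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc disk, 512 B blocks
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc disk, 1024 B blocks
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, 8 KiB IO unit, no in-capsule data
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

After the last call the target emits the 'NVMe/TCP Target Listening on 127.0.0.1 port 4420' notice seen above.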
00:05:52.019 16:58:07 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:52.019 16:58:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.019 16:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.019 16:58:07 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:52.019 16:58:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.019 16:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:52.019 16:58:08 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:52.019 16:58:08 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.019 16:58:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.276 MallocBdevForConfigChangeCheck 00:05:52.276 16:58:08 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:52.276 16:58:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.276 16:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:52.276 16:58:08 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:52.276 16:58:08 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.532 16:58:08 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:52.532 INFO: shutting down applications... 00:05:52.532 16:58:08 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:52.532 16:58:08 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:52.532 16:58:08 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:52.532 16:58:08 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:54.430 Calling clear_iscsi_subsystem 00:05:54.430 Calling clear_nvmf_subsystem 00:05:54.430 Calling clear_nbd_subsystem 00:05:54.430 Calling clear_ublk_subsystem 00:05:54.430 Calling clear_vhost_blk_subsystem 00:05:54.430 Calling clear_vhost_scsi_subsystem 00:05:54.430 Calling clear_scheduler_subsystem 00:05:54.430 Calling clear_bdev_subsystem 00:05:54.430 Calling clear_accel_subsystem 00:05:54.430 Calling clear_vmd_subsystem 00:05:54.430 Calling clear_sock_subsystem 00:05:54.430 Calling clear_iobuf_subsystem 00:05:54.430 16:58:10 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:54.430 16:58:10 -- json_config/json_config.sh@396 -- # count=100 00:05:54.430 16:58:10 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:54.430 16:58:10 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.430 16:58:10 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:54.430 16:58:10 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:54.687 16:58:10 -- json_config/json_config.sh@398 -- # break 00:05:54.687 16:58:10 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:54.687 16:58:10 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:54.687 16:58:10 -- json_config/json_config.sh@120 -- # local app=target 00:05:54.687 16:58:10 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:54.687 16:58:10 -- json_config/json_config.sh@124 -- # [[ -n 416512 ]] 00:05:54.688 16:58:10 -- json_config/json_config.sh@127 -- # kill -SIGINT 416512 00:05:54.688 16:58:10 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:54.688 16:58:10 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:54.688 16:58:10 -- json_config/json_config.sh@130 -- # kill -0 416512 00:05:54.688 16:58:10 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:55.252 16:58:11 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:55.252 16:58:11 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:55.252 16:58:11 -- json_config/json_config.sh@130 -- # kill -0 416512 00:05:55.252 16:58:11 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:55.252 16:58:11 -- json_config/json_config.sh@132 -- # break 00:05:55.252 16:58:11 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:55.252 16:58:11 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:55.252 SPDK target shutdown done 00:05:55.252 16:58:11 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:55.252 INFO: relaunching applications... 00:05:55.252 16:58:11 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.252 16:58:11 -- json_config/json_config.sh@98 -- # local app=target 00:05:55.252 16:58:11 -- json_config/json_config.sh@99 -- # shift 00:05:55.252 16:58:11 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:55.252 16:58:11 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:55.252 16:58:11 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:55.252 16:58:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:55.252 16:58:11 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:55.252 16:58:11 -- json_config/json_config.sh@111 -- # app_pid[$app]=417731 00:05:55.252 16:58:11 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.252 16:58:11 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:55.252 Waiting for target to run... 00:05:55.252 16:58:11 -- json_config/json_config.sh@114 -- # waitforlisten 417731 /var/tmp/spdk_tgt.sock 00:05:55.252 16:58:11 -- common/autotest_common.sh@819 -- # '[' -z 417731 ']' 00:05:55.252 16:58:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.252 16:58:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.252 16:58:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.252 16:58:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.252 16:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:55.252 [2024-07-20 16:58:11.208841] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
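The relaunch here is the round-trip under test: the live configuration was captured with save_config (tgt_rpc save_config above), the target was stopped, and a new spdk_tgt now boots directly from that JSON instead of --wait-for-rpc. A simplified equivalent, reusing the hypothetical $tgt_pid from the earlier launch sketch:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  kill -SIGINT "$tgt_pid" && wait "$tgt_pid"   # the script polls with kill -0 instead
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json &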
00:05:55.252 [2024-07-20 16:58:11.208941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417731 ] 00:05:55.252 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.817 [2024-07-20 16:58:11.728723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.817 [2024-07-20 16:58:11.808322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.817 [2024-07-20 16:58:11.808532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.096 [2024-07-20 16:58:14.830783] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:59.096 [2024-07-20 16:58:14.863257] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:59.096 16:58:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.096 16:58:15 -- common/autotest_common.sh@852 -- # return 0 00:05:59.096 16:58:15 -- json_config/json_config.sh@115 -- # echo '' 00:05:59.096 00:05:59.096 16:58:15 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:59.096 16:58:15 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:59.096 INFO: Checking if target configuration is the same... 00:05:59.096 16:58:15 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.096 16:58:15 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:59.096 16:58:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.096 + '[' 2 -ne 2 ']' 00:05:59.096 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:59.096 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:59.096 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.096 +++ basename /dev/fd/62 00:05:59.096 ++ mktemp /tmp/62.XXX 00:05:59.096 + tmp_file_1=/tmp/62.tY7 00:05:59.096 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.096 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:59.096 + tmp_file_2=/tmp/spdk_tgt_config.json.hCy 00:05:59.096 + ret=0 00:05:59.096 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.380 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.380 + diff -u /tmp/62.tY7 /tmp/spdk_tgt_config.json.hCy 00:05:59.380 + echo 'INFO: JSON config files are the same' 00:05:59.380 INFO: JSON config files are the same 00:05:59.380 + rm /tmp/62.tY7 /tmp/spdk_tgt_config.json.hCy 00:05:59.380 + exit 0 00:05:59.380 16:58:15 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:59.380 16:58:15 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:59.380 INFO: changing configuration and checking if this can be detected... 
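The 'JSON config files are the same' verdict comes from json_diff.sh: both inputs are normalized with config_filter.py -method sort before diff -u, so key ordering and other cosmetic differences cannot cause a false mismatch. In essence (the temp-file names here are illustrative, not the mktemp ones from the log):

  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'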
00:05:59.380 16:58:15 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:59.380 16:58:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:59.645 16:58:15 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.645 16:58:15 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:59.645 16:58:15 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.645 + '[' 2 -ne 2 ']' 00:05:59.645 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:59.645 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:59.645 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.645 +++ basename /dev/fd/62 00:05:59.645 ++ mktemp /tmp/62.XXX 00:05:59.645 + tmp_file_1=/tmp/62.nkK 00:05:59.645 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.645 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:59.645 + tmp_file_2=/tmp/spdk_tgt_config.json.z1B 00:05:59.645 + ret=0 00:05:59.645 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.918 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:00.176 + diff -u /tmp/62.nkK /tmp/spdk_tgt_config.json.z1B 00:06:00.176 + ret=1 00:06:00.176 + echo '=== Start of file: /tmp/62.nkK ===' 00:06:00.176 + cat /tmp/62.nkK 00:06:00.176 + echo '=== End of file: /tmp/62.nkK ===' 00:06:00.176 + echo '' 00:06:00.176 + echo '=== Start of file: /tmp/spdk_tgt_config.json.z1B ===' 00:06:00.176 + cat /tmp/spdk_tgt_config.json.z1B 00:06:00.176 + echo '=== End of file: /tmp/spdk_tgt_config.json.z1B ===' 00:06:00.176 + echo '' 00:06:00.176 + rm /tmp/62.nkK /tmp/spdk_tgt_config.json.z1B 00:06:00.176 + exit 1 00:06:00.176 16:58:16 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:00.176 INFO: configuration change detected. 
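Change detection then works by mutating one sentinel bdev and re-running the identical comparison: deleting MallocBdevForConfigChangeCheck makes the live config diverge from the saved file, so diff exits nonzero (the ret=1 above). Sketch, reusing the illustrative /tmp/saved.json from the previous step:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  if ! diff -u /tmp/saved.json <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
          | ./test/json_config/config_filter.py -method sort); then
      echo 'INFO: configuration change detected.'
  fi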
00:06:00.176 16:58:16 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:00.176 16:58:16 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:00.176 16:58:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:00.176 16:58:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.176 16:58:16 -- json_config/json_config.sh@360 -- # local ret=0 00:06:00.176 16:58:16 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:00.176 16:58:16 -- json_config/json_config.sh@370 -- # [[ -n 417731 ]] 00:06:00.176 16:58:16 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:00.176 16:58:16 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:00.176 16:58:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:00.176 16:58:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.176 16:58:16 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:00.176 16:58:16 -- json_config/json_config.sh@246 -- # uname -s 00:06:00.176 16:58:16 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:00.176 16:58:16 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:00.176 16:58:16 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:00.176 16:58:16 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:00.176 16:58:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:00.176 16:58:16 -- common/autotest_common.sh@10 -- # set +x 00:06:00.176 16:58:16 -- json_config/json_config.sh@376 -- # killprocess 417731 00:06:00.176 16:58:16 -- common/autotest_common.sh@926 -- # '[' -z 417731 ']' 00:06:00.176 16:58:16 -- common/autotest_common.sh@930 -- # kill -0 417731 00:06:00.176 16:58:16 -- common/autotest_common.sh@931 -- # uname 00:06:00.176 16:58:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:00.176 16:58:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 417731 00:06:00.176 16:58:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:00.176 16:58:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:00.176 16:58:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 417731' 00:06:00.176 killing process with pid 417731 00:06:00.176 16:58:16 -- common/autotest_common.sh@945 -- # kill 417731 00:06:00.176 16:58:16 -- common/autotest_common.sh@950 -- # wait 417731 00:06:02.078 16:58:17 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.078 16:58:17 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:02.078 16:58:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:02.078 16:58:17 -- common/autotest_common.sh@10 -- # set +x 00:06:02.078 16:58:17 -- json_config/json_config.sh@381 -- # return 0 00:06:02.078 16:58:17 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:02.078 INFO: Success 00:06:02.078 00:06:02.078 real 0m15.949s 00:06:02.078 user 0m18.154s 00:06:02.078 sys 0m2.091s 00:06:02.078 16:58:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.078 16:58:17 -- common/autotest_common.sh@10 -- # set +x 00:06:02.078 ************************************ 00:06:02.078 END TEST json_config 00:06:02.078 ************************************ 00:06:02.078 16:58:17 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:02.078 16:58:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.078 16:58:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.078 16:58:17 -- common/autotest_common.sh@10 -- # set +x 00:06:02.078 ************************************ 00:06:02.078 START TEST json_config_extra_key 00:06:02.078 ************************************ 00:06:02.078 16:58:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.078 16:58:17 -- nvmf/common.sh@7 -- # uname -s 00:06:02.078 16:58:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.078 16:58:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.078 16:58:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.078 16:58:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.078 16:58:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.078 16:58:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.078 16:58:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.078 16:58:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.078 16:58:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.078 16:58:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.078 16:58:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:02.078 16:58:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:02.078 16:58:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.078 16:58:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.078 16:58:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.078 16:58:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.078 16:58:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.078 16:58:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.078 16:58:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.078 16:58:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.078 16:58:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.078 16:58:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.078 16:58:17 -- paths/export.sh@5 -- # export PATH 00:06:02.078 16:58:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.078 16:58:17 -- nvmf/common.sh@46 -- # : 0 00:06:02.078 16:58:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:02.078 16:58:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:02.078 16:58:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:02.078 16:58:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.078 16:58:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.078 16:58:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:02.078 16:58:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:02.078 16:58:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:02.078 INFO: launching applications... 
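json_config_extra_key exercises the same --json boot path, but from a canned config shipped with the tests (extra_key.json) rather than one saved from a live target; -m 0x1 -s 1024 are the default target app_params (one core, 1024 MB memory). The launch that follows is equivalent to:

  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json ./test/json_config/extra_key.json &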
00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=418674 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:02.078 Waiting for target to run... 00:06:02.078 16:58:17 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 418674 /var/tmp/spdk_tgt.sock 00:06:02.078 16:58:17 -- common/autotest_common.sh@819 -- # '[' -z 418674 ']' 00:06:02.078 16:58:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.078 16:58:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:02.078 16:58:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.078 16:58:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:02.078 16:58:17 -- common/autotest_common.sh@10 -- # set +x 00:06:02.078 [2024-07-20 16:58:17.902353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:02.079 [2024-07-20 16:58:17.902436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418674 ] 00:06:02.079 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.337 [2024-07-20 16:58:18.236947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.337 [2024-07-20 16:58:18.300625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.337 [2024-07-20 16:58:18.300790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.903 16:58:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.904 16:58:18 -- common/autotest_common.sh@852 -- # return 0 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:02.904 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:02.904 INFO: shutting down applications... 
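The shutdown that follows is the usual SIGINT-then-poll pattern: send SIGINT, then probe the pid with kill -0 in 0.5 s steps, up to 30 tries, until it is gone. A compact equivalent, assuming a $tgt_pid captured at launch:

  kill -SIGINT "$tgt_pid"
  for i in $(seq 1 30); do
      kill -0 "$tgt_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done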
00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 418674 ]] 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 418674 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@50 -- # kill -0 418674 00:06:02.904 16:58:18 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:03.472 16:58:19 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:03.472 16:58:19 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:03.472 16:58:19 -- json_config/json_config_extra_key.sh@50 -- # kill -0 418674 00:06:03.472 16:58:19 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:03.472 16:58:19 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:03.472 16:58:19 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:03.472 16:58:19 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:03.472 SPDK target shutdown done 00:06:03.472 16:58:19 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:03.472 Success 00:06:03.472 00:06:03.472 real 0m1.576s 00:06:03.472 user 0m1.575s 00:06:03.472 sys 0m0.443s 00:06:03.472 16:58:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.472 16:58:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.472 ************************************ 00:06:03.472 END TEST json_config_extra_key 00:06:03.472 ************************************ 00:06:03.472 16:58:19 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.472 16:58:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.472 16:58:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.472 16:58:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.472 ************************************ 00:06:03.472 START TEST alias_rpc 00:06:03.472 ************************************ 00:06:03.472 16:58:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.472 * Looking for test storage... 00:06:03.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:03.472 16:58:19 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:03.472 16:58:19 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=418947 00:06:03.472 16:58:19 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.472 16:58:19 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 418947 00:06:03.472 16:58:19 -- common/autotest_common.sh@819 -- # '[' -z 418947 ']' 00:06:03.472 16:58:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.472 16:58:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.472 16:58:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:03.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.472 16:58:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.472 16:58:19 -- common/autotest_common.sh@10 -- # set +x 00:06:03.472 [2024-07-20 16:58:19.497403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:03.472 [2024-07-20 16:58:19.497488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid418947 ] 00:06:03.472 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.472 [2024-07-20 16:58:19.555092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.731 [2024-07-20 16:58:19.639397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.731 [2024-07-20 16:58:19.639544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.298 16:58:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.298 16:58:20 -- common/autotest_common.sh@852 -- # return 0 00:06:04.298 16:58:20 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:04.556 16:58:20 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 418947 00:06:04.556 16:58:20 -- common/autotest_common.sh@926 -- # '[' -z 418947 ']' 00:06:04.556 16:58:20 -- common/autotest_common.sh@930 -- # kill -0 418947 00:06:04.556 16:58:20 -- common/autotest_common.sh@931 -- # uname 00:06:04.556 16:58:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:04.556 16:58:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 418947 00:06:04.556 16:58:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:04.556 16:58:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:04.556 16:58:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 418947' 00:06:04.556 killing process with pid 418947 00:06:04.556 16:58:20 -- common/autotest_common.sh@945 -- # kill 418947 00:06:04.556 16:58:20 -- common/autotest_common.sh@950 -- # wait 418947 00:06:05.123 00:06:05.123 real 0m1.702s 00:06:05.123 user 0m1.953s 00:06:05.123 sys 0m0.453s 00:06:05.123 16:58:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.123 16:58:21 -- common/autotest_common.sh@10 -- # set +x 00:06:05.123 ************************************ 00:06:05.123 END TEST alias_rpc 00:06:05.123 ************************************ 00:06:05.123 16:58:21 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:05.123 16:58:21 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:05.123 16:58:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.123 16:58:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.123 16:58:21 -- common/autotest_common.sh@10 -- # set +x 00:06:05.123 ************************************ 00:06:05.123 START TEST spdkcli_tcp 00:06:05.123 ************************************ 00:06:05.123 16:58:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:05.123 * Looking for test storage... 
00:06:05.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:05.123 16:58:21 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:05.123 16:58:21 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:05.123 16:58:21 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:05.123 16:58:21 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:05.123 16:58:21 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:05.123 16:58:21 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:05.124 16:58:21 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:05.124 16:58:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:05.124 16:58:21 -- common/autotest_common.sh@10 -- # set +x 00:06:05.124 16:58:21 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=419181 00:06:05.124 16:58:21 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:05.124 16:58:21 -- spdkcli/tcp.sh@27 -- # waitforlisten 419181 00:06:05.124 16:58:21 -- common/autotest_common.sh@819 -- # '[' -z 419181 ']' 00:06:05.124 16:58:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.124 16:58:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.124 16:58:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.124 16:58:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.124 16:58:21 -- common/autotest_common.sh@10 -- # set +x 00:06:05.124 [2024-07-20 16:58:21.231603] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:05.124 [2024-07-20 16:58:21.231698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419181 ] 00:06:05.124 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.383 [2024-07-20 16:58:21.291009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.383 [2024-07-20 16:58:21.373250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.383 [2024-07-20 16:58:21.373442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.383 [2024-07-20 16:58:21.373447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.317 16:58:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:06.317 16:58:22 -- common/autotest_common.sh@852 -- # return 0 00:06:06.317 16:58:22 -- spdkcli/tcp.sh@31 -- # socat_pid=419322 00:06:06.317 16:58:22 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:06.317 16:58:22 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:06.317 [ 00:06:06.317 "bdev_malloc_delete", 00:06:06.317 "bdev_malloc_create", 00:06:06.317 "bdev_null_resize", 00:06:06.317 "bdev_null_delete", 00:06:06.317 "bdev_null_create", 00:06:06.317 "bdev_nvme_cuse_unregister", 00:06:06.317 "bdev_nvme_cuse_register", 00:06:06.317 "bdev_opal_new_user", 00:06:06.317 "bdev_opal_set_lock_state", 00:06:06.318 "bdev_opal_delete", 00:06:06.318 "bdev_opal_get_info", 00:06:06.318 "bdev_opal_create", 00:06:06.318 "bdev_nvme_opal_revert", 00:06:06.318 "bdev_nvme_opal_init", 00:06:06.318 "bdev_nvme_send_cmd", 00:06:06.318 "bdev_nvme_get_path_iostat", 00:06:06.318 "bdev_nvme_get_mdns_discovery_info", 00:06:06.318 "bdev_nvme_stop_mdns_discovery", 00:06:06.318 "bdev_nvme_start_mdns_discovery", 00:06:06.318 "bdev_nvme_set_multipath_policy", 00:06:06.318 "bdev_nvme_set_preferred_path", 00:06:06.318 "bdev_nvme_get_io_paths", 00:06:06.318 "bdev_nvme_remove_error_injection", 00:06:06.318 "bdev_nvme_add_error_injection", 00:06:06.318 "bdev_nvme_get_discovery_info", 00:06:06.318 "bdev_nvme_stop_discovery", 00:06:06.318 "bdev_nvme_start_discovery", 00:06:06.318 "bdev_nvme_get_controller_health_info", 00:06:06.318 "bdev_nvme_disable_controller", 00:06:06.318 "bdev_nvme_enable_controller", 00:06:06.318 "bdev_nvme_reset_controller", 00:06:06.318 "bdev_nvme_get_transport_statistics", 00:06:06.318 "bdev_nvme_apply_firmware", 00:06:06.318 "bdev_nvme_detach_controller", 00:06:06.318 "bdev_nvme_get_controllers", 00:06:06.318 "bdev_nvme_attach_controller", 00:06:06.318 "bdev_nvme_set_hotplug", 00:06:06.318 "bdev_nvme_set_options", 00:06:06.318 "bdev_passthru_delete", 00:06:06.318 "bdev_passthru_create", 00:06:06.318 "bdev_lvol_grow_lvstore", 00:06:06.318 "bdev_lvol_get_lvols", 00:06:06.318 "bdev_lvol_get_lvstores", 00:06:06.318 "bdev_lvol_delete", 00:06:06.318 "bdev_lvol_set_read_only", 00:06:06.318 "bdev_lvol_resize", 00:06:06.318 "bdev_lvol_decouple_parent", 00:06:06.318 "bdev_lvol_inflate", 00:06:06.318 "bdev_lvol_rename", 00:06:06.318 "bdev_lvol_clone_bdev", 00:06:06.318 "bdev_lvol_clone", 00:06:06.318 "bdev_lvol_snapshot", 00:06:06.318 "bdev_lvol_create", 00:06:06.318 "bdev_lvol_delete_lvstore", 00:06:06.318 "bdev_lvol_rename_lvstore", 00:06:06.318 "bdev_lvol_create_lvstore", 00:06:06.318 "bdev_raid_set_options", 00:06:06.318 
"bdev_raid_remove_base_bdev", 00:06:06.318 "bdev_raid_add_base_bdev", 00:06:06.318 "bdev_raid_delete", 00:06:06.318 "bdev_raid_create", 00:06:06.318 "bdev_raid_get_bdevs", 00:06:06.318 "bdev_error_inject_error", 00:06:06.318 "bdev_error_delete", 00:06:06.318 "bdev_error_create", 00:06:06.318 "bdev_split_delete", 00:06:06.318 "bdev_split_create", 00:06:06.318 "bdev_delay_delete", 00:06:06.318 "bdev_delay_create", 00:06:06.318 "bdev_delay_update_latency", 00:06:06.318 "bdev_zone_block_delete", 00:06:06.318 "bdev_zone_block_create", 00:06:06.318 "blobfs_create", 00:06:06.318 "blobfs_detect", 00:06:06.318 "blobfs_set_cache_size", 00:06:06.318 "bdev_aio_delete", 00:06:06.318 "bdev_aio_rescan", 00:06:06.318 "bdev_aio_create", 00:06:06.318 "bdev_ftl_set_property", 00:06:06.318 "bdev_ftl_get_properties", 00:06:06.318 "bdev_ftl_get_stats", 00:06:06.318 "bdev_ftl_unmap", 00:06:06.318 "bdev_ftl_unload", 00:06:06.318 "bdev_ftl_delete", 00:06:06.318 "bdev_ftl_load", 00:06:06.318 "bdev_ftl_create", 00:06:06.318 "bdev_virtio_attach_controller", 00:06:06.318 "bdev_virtio_scsi_get_devices", 00:06:06.318 "bdev_virtio_detach_controller", 00:06:06.318 "bdev_virtio_blk_set_hotplug", 00:06:06.318 "bdev_iscsi_delete", 00:06:06.318 "bdev_iscsi_create", 00:06:06.318 "bdev_iscsi_set_options", 00:06:06.318 "accel_error_inject_error", 00:06:06.318 "ioat_scan_accel_module", 00:06:06.318 "dsa_scan_accel_module", 00:06:06.318 "iaa_scan_accel_module", 00:06:06.318 "vfu_virtio_create_scsi_endpoint", 00:06:06.318 "vfu_virtio_scsi_remove_target", 00:06:06.318 "vfu_virtio_scsi_add_target", 00:06:06.318 "vfu_virtio_create_blk_endpoint", 00:06:06.318 "vfu_virtio_delete_endpoint", 00:06:06.318 "iscsi_set_options", 00:06:06.318 "iscsi_get_auth_groups", 00:06:06.318 "iscsi_auth_group_remove_secret", 00:06:06.318 "iscsi_auth_group_add_secret", 00:06:06.318 "iscsi_delete_auth_group", 00:06:06.318 "iscsi_create_auth_group", 00:06:06.318 "iscsi_set_discovery_auth", 00:06:06.318 "iscsi_get_options", 00:06:06.318 "iscsi_target_node_request_logout", 00:06:06.318 "iscsi_target_node_set_redirect", 00:06:06.318 "iscsi_target_node_set_auth", 00:06:06.318 "iscsi_target_node_add_lun", 00:06:06.318 "iscsi_get_connections", 00:06:06.318 "iscsi_portal_group_set_auth", 00:06:06.318 "iscsi_start_portal_group", 00:06:06.318 "iscsi_delete_portal_group", 00:06:06.318 "iscsi_create_portal_group", 00:06:06.318 "iscsi_get_portal_groups", 00:06:06.318 "iscsi_delete_target_node", 00:06:06.318 "iscsi_target_node_remove_pg_ig_maps", 00:06:06.318 "iscsi_target_node_add_pg_ig_maps", 00:06:06.318 "iscsi_create_target_node", 00:06:06.318 "iscsi_get_target_nodes", 00:06:06.318 "iscsi_delete_initiator_group", 00:06:06.318 "iscsi_initiator_group_remove_initiators", 00:06:06.318 "iscsi_initiator_group_add_initiators", 00:06:06.318 "iscsi_create_initiator_group", 00:06:06.318 "iscsi_get_initiator_groups", 00:06:06.318 "nvmf_set_crdt", 00:06:06.318 "nvmf_set_config", 00:06:06.318 "nvmf_set_max_subsystems", 00:06:06.318 "nvmf_subsystem_get_listeners", 00:06:06.318 "nvmf_subsystem_get_qpairs", 00:06:06.318 "nvmf_subsystem_get_controllers", 00:06:06.318 "nvmf_get_stats", 00:06:06.318 "nvmf_get_transports", 00:06:06.318 "nvmf_create_transport", 00:06:06.318 "nvmf_get_targets", 00:06:06.318 "nvmf_delete_target", 00:06:06.318 "nvmf_create_target", 00:06:06.318 "nvmf_subsystem_allow_any_host", 00:06:06.318 "nvmf_subsystem_remove_host", 00:06:06.318 "nvmf_subsystem_add_host", 00:06:06.318 "nvmf_subsystem_remove_ns", 00:06:06.318 "nvmf_subsystem_add_ns", 00:06:06.318 
"nvmf_subsystem_listener_set_ana_state", 00:06:06.318 "nvmf_discovery_get_referrals", 00:06:06.318 "nvmf_discovery_remove_referral", 00:06:06.318 "nvmf_discovery_add_referral", 00:06:06.318 "nvmf_subsystem_remove_listener", 00:06:06.318 "nvmf_subsystem_add_listener", 00:06:06.318 "nvmf_delete_subsystem", 00:06:06.318 "nvmf_create_subsystem", 00:06:06.318 "nvmf_get_subsystems", 00:06:06.318 "env_dpdk_get_mem_stats", 00:06:06.318 "nbd_get_disks", 00:06:06.318 "nbd_stop_disk", 00:06:06.318 "nbd_start_disk", 00:06:06.318 "ublk_recover_disk", 00:06:06.318 "ublk_get_disks", 00:06:06.318 "ublk_stop_disk", 00:06:06.318 "ublk_start_disk", 00:06:06.318 "ublk_destroy_target", 00:06:06.318 "ublk_create_target", 00:06:06.318 "virtio_blk_create_transport", 00:06:06.318 "virtio_blk_get_transports", 00:06:06.318 "vhost_controller_set_coalescing", 00:06:06.318 "vhost_get_controllers", 00:06:06.318 "vhost_delete_controller", 00:06:06.318 "vhost_create_blk_controller", 00:06:06.318 "vhost_scsi_controller_remove_target", 00:06:06.318 "vhost_scsi_controller_add_target", 00:06:06.318 "vhost_start_scsi_controller", 00:06:06.318 "vhost_create_scsi_controller", 00:06:06.318 "thread_set_cpumask", 00:06:06.318 "framework_get_scheduler", 00:06:06.318 "framework_set_scheduler", 00:06:06.318 "framework_get_reactors", 00:06:06.318 "thread_get_io_channels", 00:06:06.318 "thread_get_pollers", 00:06:06.318 "thread_get_stats", 00:06:06.318 "framework_monitor_context_switch", 00:06:06.318 "spdk_kill_instance", 00:06:06.318 "log_enable_timestamps", 00:06:06.318 "log_get_flags", 00:06:06.318 "log_clear_flag", 00:06:06.318 "log_set_flag", 00:06:06.318 "log_get_level", 00:06:06.318 "log_set_level", 00:06:06.318 "log_get_print_level", 00:06:06.318 "log_set_print_level", 00:06:06.318 "framework_enable_cpumask_locks", 00:06:06.318 "framework_disable_cpumask_locks", 00:06:06.318 "framework_wait_init", 00:06:06.318 "framework_start_init", 00:06:06.318 "scsi_get_devices", 00:06:06.318 "bdev_get_histogram", 00:06:06.318 "bdev_enable_histogram", 00:06:06.318 "bdev_set_qos_limit", 00:06:06.318 "bdev_set_qd_sampling_period", 00:06:06.318 "bdev_get_bdevs", 00:06:06.318 "bdev_reset_iostat", 00:06:06.318 "bdev_get_iostat", 00:06:06.318 "bdev_examine", 00:06:06.318 "bdev_wait_for_examine", 00:06:06.318 "bdev_set_options", 00:06:06.318 "notify_get_notifications", 00:06:06.318 "notify_get_types", 00:06:06.318 "accel_get_stats", 00:06:06.318 "accel_set_options", 00:06:06.318 "accel_set_driver", 00:06:06.318 "accel_crypto_key_destroy", 00:06:06.318 "accel_crypto_keys_get", 00:06:06.318 "accel_crypto_key_create", 00:06:06.318 "accel_assign_opc", 00:06:06.318 "accel_get_module_info", 00:06:06.318 "accel_get_opc_assignments", 00:06:06.318 "vmd_rescan", 00:06:06.318 "vmd_remove_device", 00:06:06.318 "vmd_enable", 00:06:06.318 "sock_set_default_impl", 00:06:06.318 "sock_impl_set_options", 00:06:06.318 "sock_impl_get_options", 00:06:06.318 "iobuf_get_stats", 00:06:06.318 "iobuf_set_options", 00:06:06.318 "framework_get_pci_devices", 00:06:06.318 "framework_get_config", 00:06:06.318 "framework_get_subsystems", 00:06:06.318 "vfu_tgt_set_base_path", 00:06:06.318 "trace_get_info", 00:06:06.318 "trace_get_tpoint_group_mask", 00:06:06.318 "trace_disable_tpoint_group", 00:06:06.318 "trace_enable_tpoint_group", 00:06:06.318 "trace_clear_tpoint_mask", 00:06:06.318 "trace_set_tpoint_mask", 00:06:06.318 "spdk_get_version", 00:06:06.318 "rpc_get_methods" 00:06:06.318 ] 00:06:06.318 16:58:22 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:06.318 
16:58:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:06.318 16:58:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.318 16:58:22 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:06.318 16:58:22 -- spdkcli/tcp.sh@38 -- # killprocess 419181 00:06:06.318 16:58:22 -- common/autotest_common.sh@926 -- # '[' -z 419181 ']' 00:06:06.318 16:58:22 -- common/autotest_common.sh@930 -- # kill -0 419181 00:06:06.318 16:58:22 -- common/autotest_common.sh@931 -- # uname 00:06:06.318 16:58:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:06.318 16:58:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 419181 00:06:06.318 16:58:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:06.318 16:58:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:06.318 16:58:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 419181' 00:06:06.319 killing process with pid 419181 00:06:06.319 16:58:22 -- common/autotest_common.sh@945 -- # kill 419181 00:06:06.319 16:58:22 -- common/autotest_common.sh@950 -- # wait 419181 00:06:06.883 00:06:06.883 real 0m1.697s 00:06:06.883 user 0m3.320s 00:06:06.883 sys 0m0.455s 00:06:06.883 16:58:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.883 16:58:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.883 ************************************ 00:06:06.883 END TEST spdkcli_tcp 00:06:06.883 ************************************ 00:06:06.883 16:58:22 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.883 16:58:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:06.883 16:58:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.883 16:58:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.883 ************************************ 00:06:06.883 START TEST dpdk_mem_utility 00:06:06.883 ************************************ 00:06:06.883 16:58:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.883 * Looking for test storage... 00:06:06.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:06.883 16:58:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:06.883 16:58:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=419514 00:06:06.883 16:58:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.883 16:58:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 419514 00:06:06.883 16:58:22 -- common/autotest_common.sh@819 -- # '[' -z 419514 ']' 00:06:06.883 16:58:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.883 16:58:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:06.883 16:58:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.883 16:58:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:06.883 16:58:22 -- common/autotest_common.sh@10 -- # set +x 00:06:06.883 [2024-07-20 16:58:22.953492] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:06.883 [2024-07-20 16:58:22.953574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419514 ] 00:06:06.883 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.883 [2024-07-20 16:58:23.011072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.140 [2024-07-20 16:58:23.095616] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.140 [2024-07-20 16:58:23.095808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.073 16:58:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.073 16:58:23 -- common/autotest_common.sh@852 -- # return 0 00:06:08.073 16:58:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:08.073 16:58:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:08.073 16:58:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.073 16:58:23 -- common/autotest_common.sh@10 -- # set +x 00:06:08.073 { 00:06:08.073 "filename": "/tmp/spdk_mem_dump.txt" 00:06:08.073 } 00:06:08.073 16:58:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.073 16:58:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:08.073 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:08.073 1 heaps totaling size 814.000000 MiB 00:06:08.073 size: 814.000000 MiB heap id: 0 00:06:08.073 end heaps---------- 00:06:08.073 8 mempools totaling size 598.116089 MiB 00:06:08.073 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:08.073 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:08.073 size: 84.521057 MiB name: bdev_io_419514 00:06:08.073 size: 51.011292 MiB name: evtpool_419514 00:06:08.073 size: 50.003479 MiB name: msgpool_419514 00:06:08.073 size: 21.763794 MiB name: PDU_Pool 00:06:08.073 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:08.073 size: 0.026123 MiB name: Session_Pool 00:06:08.073 end mempools------- 00:06:08.073 6 memzones totaling size 4.142822 MiB 00:06:08.073 size: 1.000366 MiB name: RG_ring_0_419514 00:06:08.073 size: 1.000366 MiB name: RG_ring_1_419514 00:06:08.073 size: 1.000366 MiB name: RG_ring_4_419514 00:06:08.073 size: 1.000366 MiB name: RG_ring_5_419514 00:06:08.073 size: 0.125366 MiB name: RG_ring_2_419514 00:06:08.073 size: 0.015991 MiB name: RG_ring_3_419514 00:06:08.073 end memzones------- 00:06:08.073 16:58:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:08.073 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:08.073 list of free elements. 
size: 12.519348 MiB 00:06:08.073 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:08.073 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:08.073 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:08.073 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:08.073 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:08.073 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:08.073 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:08.073 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:08.073 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:08.073 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:08.073 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:08.073 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:08.073 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:08.073 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:08.073 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:08.073 list of standard malloc elements. size: 199.218079 MiB 00:06:08.073 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:08.073 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:08.073 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:08.073 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:08.073 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:08.073 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:08.073 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:08.073 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:08.073 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:08.073 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:08.073 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:08.073 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:08.073 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:08.073 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:08.073 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:08.073 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:08.073 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:08.073 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:08.073 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:08.073 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:08.073 list of memzone associated elements. size: 602.262573 MiB 00:06:08.073 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:08.073 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:08.073 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:08.073 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:08.073 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:08.073 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_419514_0 00:06:08.073 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:08.073 associated memzone info: size: 48.002930 MiB name: MP_evtpool_419514_0 00:06:08.073 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:08.074 associated memzone info: size: 48.002930 MiB name: MP_msgpool_419514_0 00:06:08.074 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:08.074 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:08.074 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:08.074 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:08.074 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:08.074 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_419514 00:06:08.074 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:08.074 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_419514 00:06:08.074 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:08.074 associated memzone info: size: 1.007996 MiB name: MP_evtpool_419514 00:06:08.074 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:08.074 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:08.074 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:08.074 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:08.074 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:08.074 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:08.074 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:08.074 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:08.074 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:08.074 associated memzone info: size: 1.000366 MiB name: RG_ring_0_419514 00:06:08.074 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:08.074 associated memzone info: size: 1.000366 MiB name: RG_ring_1_419514 00:06:08.074 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:08.074 associated memzone info: size: 1.000366 MiB name: RG_ring_4_419514 00:06:08.074 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:08.074 associated memzone info: size: 1.000366 MiB name: RG_ring_5_419514 00:06:08.074 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:08.074 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_419514 00:06:08.074 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:08.074 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:08.074 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:08.074 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:08.074 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:08.074 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:08.074 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:08.074 associated memzone info: size: 0.125366 MiB name: RG_ring_2_419514 00:06:08.074 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:08.074 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:08.074 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:08.074 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:08.074 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:08.074 associated memzone info: size: 0.015991 MiB name: RG_ring_3_419514 00:06:08.074 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:08.074 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:08.074 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:08.074 associated memzone info: size: 0.000183 MiB name: MP_msgpool_419514 00:06:08.074 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:08.074 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_419514 00:06:08.074 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:08.074 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:08.074 16:58:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:08.074 16:58:23 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 419514 00:06:08.074 16:58:23 -- common/autotest_common.sh@926 -- # '[' -z 419514 ']' 00:06:08.074 16:58:23 -- common/autotest_common.sh@930 -- # kill -0 419514 00:06:08.074 16:58:23 -- common/autotest_common.sh@931 -- # uname 00:06:08.074 16:58:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.074 16:58:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 419514 00:06:08.074 16:58:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.074 16:58:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.074 16:58:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 419514' 00:06:08.074 killing process with pid 419514 00:06:08.074 16:58:24 -- common/autotest_common.sh@945 -- # kill 419514 00:06:08.074 16:58:24 -- common/autotest_common.sh@950 -- # wait 419514 00:06:08.332 00:06:08.332 real 0m1.565s 00:06:08.332 user 0m1.713s 00:06:08.332 sys 0m0.443s 00:06:08.332 16:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.332 16:58:24 -- common/autotest_common.sh@10 -- # set +x 00:06:08.332 ************************************ 00:06:08.332 END TEST dpdk_mem_utility 00:06:08.332 ************************************ 00:06:08.332 16:58:24 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:08.332 16:58:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.332 16:58:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.332 16:58:24 -- common/autotest_common.sh@10 -- # set +x 00:06:08.332 
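The heap, mempool, and memzone dump above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory statistics to the file named in the RPC reply (/tmp/spdk_mem_dump.txt here), and dpdk_mem_info.py renders that dump. A hand-run sketch of the same flow, with the socket and script paths assumed from the trace:

    # Ask the running target to dump its DPDK memory stats to a file
    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
    # Summarize heaps/mempools/memzones, then per-element detail as '-m 0' does above
    ./spdk/scripts/dpdk_mem_info.py
    ./spdk/scripts/dpdk_mem_info.py -m 0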
************************************ 00:06:08.332 START TEST event 00:06:08.332 ************************************ 00:06:08.332 16:58:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:08.589 * Looking for test storage... 00:06:08.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:08.589 16:58:24 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:08.589 16:58:24 -- bdev/nbd_common.sh@6 -- # set -e 00:06:08.589 16:58:24 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:08.589 16:58:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:08.589 16:58:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.589 16:58:24 -- common/autotest_common.sh@10 -- # set +x 00:06:08.589 ************************************ 00:06:08.589 START TEST event_perf 00:06:08.589 ************************************ 00:06:08.589 16:58:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:08.589 Running I/O for 1 seconds...[2024-07-20 16:58:24.516252] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:08.589 [2024-07-20 16:58:24.516330] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419710 ] 00:06:08.589 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.589 [2024-07-20 16:58:24.581537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.589 [2024-07-20 16:58:24.673036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.589 [2024-07-20 16:58:24.673057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.589 [2024-07-20 16:58:24.673078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.589 [2024-07-20 16:58:24.673081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.955 Running I/O for 1 seconds... 00:06:09.955 lcore 0: 229365 00:06:09.955 lcore 1: 229362 00:06:09.955 lcore 2: 229363 00:06:09.955 lcore 3: 229364 00:06:09.955 done. 
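event_perf was started with -m 0xF, one reactor per set bit, which is why lcores 0-3 each report an event count. Expanding a hex cpumask into lcore numbers is plain bit arithmetic; a bash one-off for illustration only:

    mask=0xF                      # 0b1111 -> lcores 0, 1, 2, 3
    for ((i = 0; i < 64; i++)); do
        (( (mask >> i) & 1 )) && echo "lcore $i"
    done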
00:06:09.955 00:06:09.955 real 0m1.254s 00:06:09.955 user 0m4.160s 00:06:09.955 sys 0m0.086s 00:06:09.955 16:58:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.955 16:58:25 -- common/autotest_common.sh@10 -- # set +x 00:06:09.955 ************************************ 00:06:09.955 END TEST event_perf 00:06:09.955 ************************************ 00:06:09.955 16:58:25 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:09.955 16:58:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:09.955 16:58:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.955 16:58:25 -- common/autotest_common.sh@10 -- # set +x 00:06:09.955 ************************************ 00:06:09.955 START TEST event_reactor 00:06:09.955 ************************************ 00:06:09.955 16:58:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:09.955 [2024-07-20 16:58:25.794584] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:09.955 [2024-07-20 16:58:25.794666] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419873 ] 00:06:09.955 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.955 [2024-07-20 16:58:25.860052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.955 [2024-07-20 16:58:25.950071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.887 test_start 00:06:10.887 oneshot 00:06:10.887 tick 100 00:06:10.887 tick 100 00:06:10.887 tick 250 00:06:10.887 tick 100 00:06:10.887 tick 100 00:06:10.887 tick 100 00:06:10.887 tick 250 00:06:10.887 tick 500 00:06:10.887 tick 100 00:06:10.887 tick 100 00:06:10.887 tick 250 00:06:10.887 tick 100 00:06:10.887 tick 100 00:06:10.887 test_end 00:06:10.887 00:06:10.887 real 0m1.250s 00:06:10.887 user 0m1.163s 00:06:10.887 sys 0m0.081s 00:06:10.887 16:58:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.887 16:58:27 -- common/autotest_common.sh@10 -- # set +x 00:06:10.887 ************************************ 00:06:10.887 END TEST event_reactor 00:06:10.887 ************************************ 00:06:11.144 16:58:27 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:11.144 16:58:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:11.144 16:58:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.144 16:58:27 -- common/autotest_common.sh@10 -- # set +x 00:06:11.144 ************************************ 00:06:11.144 START TEST event_reactor_perf 00:06:11.144 ************************************ 00:06:11.144 16:58:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:11.144 [2024-07-20 16:58:27.075979] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:11.144 [2024-07-20 16:58:27.076061] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420029 ] 00:06:11.144 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.144 [2024-07-20 16:58:27.135473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.144 [2024-07-20 16:58:27.224353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.515 test_start 00:06:12.515 test_end 00:06:12.515 Performance: 349379 events per second 00:06:12.515 00:06:12.515 real 0m1.239s 00:06:12.515 user 0m1.155s 00:06:12.515 sys 0m0.079s 00:06:12.515 16:58:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.515 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.515 ************************************ 00:06:12.515 END TEST event_reactor_perf 00:06:12.515 ************************************ 00:06:12.515 16:58:28 -- event/event.sh@49 -- # uname -s 00:06:12.515 16:58:28 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:12.515 16:58:28 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:12.515 16:58:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.515 16:58:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.515 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.515 ************************************ 00:06:12.515 START TEST event_scheduler 00:06:12.515 ************************************ 00:06:12.515 16:58:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:12.515 * Looking for test storage... 00:06:12.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:12.515 16:58:28 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:12.515 16:58:28 -- scheduler/scheduler.sh@35 -- # scheduler_pid=420213 00:06:12.515 16:58:28 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:12.515 16:58:28 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.515 16:58:28 -- scheduler/scheduler.sh@37 -- # waitforlisten 420213 00:06:12.515 16:58:28 -- common/autotest_common.sh@819 -- # '[' -z 420213 ']' 00:06:12.515 16:58:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.515 16:58:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.515 16:58:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.515 16:58:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.515 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.515 [2024-07-20 16:58:28.422384] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:12.515 [2024-07-20 16:58:28.422463] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420213 ] 00:06:12.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.515 [2024-07-20 16:58:28.484576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.515 [2024-07-20 16:58:28.572279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.515 [2024-07-20 16:58:28.572334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.515 [2024-07-20 16:58:28.572401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.515 [2024-07-20 16:58:28.572403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.515 16:58:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.515 16:58:28 -- common/autotest_common.sh@852 -- # return 0 00:06:12.515 16:58:28 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.515 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.515 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.515 POWER: Env isn't set yet! 00:06:12.515 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:12.515 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:12.515 POWER: Cannot get available frequencies of lcore 0 00:06:12.515 POWER: Attempting to initialise PSTAT power management... 00:06:12.515 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:12.515 POWER: Initialized successfully for lcore 0 power management 00:06:12.515 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:12.515 POWER: Initialized successfully for lcore 1 power management 00:06:12.515 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:12.515 POWER: Initialized successfully for lcore 2 power management 00:06:12.515 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:12.515 POWER: Initialized successfully for lcore 3 power management 00:06:12.515 [2024-07-20 16:58:28.661987] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:12.515 [2024-07-20 16:58:28.662005] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:12.515 [2024-07-20 16:58:28.662016] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:12.515 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.515 16:58:28 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:12.515 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.515 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.777 [2024-07-20 16:58:28.763287] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
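The governor messages above are the dynamic scheduler claiming power management when framework_set_scheduler runs before framework_start_init. Because the scheduler app is launched with --wait-for-rpc, the same switch can be driven by hand against any target started that way; a sketch using RPCs that all appear in the rpc_get_methods listing earlier (socket path assumed):

    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
    ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_get_scheduler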
00:06:12.777 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:12.778 16:58:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.778 16:58:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 ************************************ 00:06:12.778 START TEST scheduler_create_thread 00:06:12.778 ************************************ 00:06:12.778 16:58:28 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 2 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 3 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 4 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 5 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 6 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 7 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 8 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 9 00:06:12.778 
16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 10 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:12.778 16:58:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:12.778 16:58:28 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:12.778 16:58:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.778 16:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.726 16:58:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:13.726 16:58:29 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:13.726 16:58:29 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:13.726 16:58:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:13.726 16:58:29 -- common/autotest_common.sh@10 -- # set +x 00:06:15.096 16:58:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.096 00:06:15.096 real 0m2.136s 00:06:15.096 user 0m0.011s 00:06:15.096 sys 0m0.002s 00:06:15.096 16:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.096 16:58:30 -- common/autotest_common.sh@10 -- # set +x 00:06:15.096 ************************************ 00:06:15.096 END TEST scheduler_create_thread 00:06:15.096 ************************************ 00:06:15.096 16:58:30 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:15.096 16:58:30 -- scheduler/scheduler.sh@46 -- # killprocess 420213 00:06:15.096 16:58:30 -- common/autotest_common.sh@926 -- # '[' -z 420213 ']' 00:06:15.096 16:58:30 -- common/autotest_common.sh@930 -- # kill -0 420213 00:06:15.096 16:58:30 -- common/autotest_common.sh@931 -- # uname 00:06:15.096 16:58:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:15.096 16:58:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 420213 00:06:15.096 16:58:30 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:15.096 16:58:30 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:15.096 16:58:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 420213' 00:06:15.096 killing process with pid 420213 00:06:15.096 16:58:30 -- common/autotest_common.sh@945 -- # kill 420213 00:06:15.096 16:58:30 -- common/autotest_common.sh@950 -- # wait 420213 00:06:15.353 [2024-07-20 16:58:31.384493] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
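scheduler_create_thread drove that lifecycle entirely through the test app's scheduler_plugin: pinned threads at various loads and masks, then an active-percentage change and a delete. The core sequence, condensed from the trace (rpc_cmd, the plugin, and thread ids 11/12 are the test's own):

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0   # returned thread_id=11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100     # returned thread_id=12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12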
00:06:15.611 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:15.611 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:15.611 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:15.611 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:15.611 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:15.611 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:15.611 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:15.611 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:15.611 00:06:15.611 real 0m3.265s 00:06:15.611 user 0m4.633s 00:06:15.611 sys 0m0.311s 00:06:15.611 16:58:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.611 16:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.611 ************************************ 00:06:15.611 END TEST event_scheduler 00:06:15.611 ************************************ 00:06:15.611 16:58:31 -- event/event.sh@51 -- # modprobe -n nbd 00:06:15.611 16:58:31 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:15.611 16:58:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.611 16:58:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.611 16:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.611 ************************************ 00:06:15.611 START TEST app_repeat 00:06:15.611 ************************************ 00:06:15.611 16:58:31 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:15.611 16:58:31 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.611 16:58:31 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.611 16:58:31 -- event/event.sh@13 -- # local nbd_list 00:06:15.611 16:58:31 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.611 16:58:31 -- event/event.sh@14 -- # local bdev_list 00:06:15.611 16:58:31 -- event/event.sh@15 -- # local repeat_times=4 00:06:15.611 16:58:31 -- event/event.sh@17 -- # modprobe nbd 00:06:15.611 16:58:31 -- event/event.sh@19 -- # repeat_pid=420670 00:06:15.611 16:58:31 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:15.611 16:58:31 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.611 16:58:31 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 420670' 00:06:15.611 Process app_repeat pid: 420670 00:06:15.611 16:58:31 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.611 16:58:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:15.611 spdk_app_start Round 0 00:06:15.611 16:58:31 -- event/event.sh@25 -- # waitforlisten 420670 /var/tmp/spdk-nbd.sock 00:06:15.611 16:58:31 -- common/autotest_common.sh@819 -- # '[' -z 420670 ']' 00:06:15.611 16:58:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.611 16:58:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.611 16:58:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:15.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 16:58:31 -- common/autotest_common.sh@828 -- # xtrace_disable 16:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:15.611 [2024-07-20 16:58:31.648505] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:15.611 [2024-07-20 16:58:31.648596] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid420670 ] 00:06:15.611 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.868 [2024-07-20 16:58:31.707061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.868 [2024-07-20 16:58:31.796741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.868 [2024-07-20 16:58:31.796744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.799 16:58:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.799 16:58:32 -- common/autotest_common.sh@852 -- # return 0 00:06:16.799 16:58:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.800 Malloc0 00:06:16.800 16:58:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.057 Malloc1 00:06:17.057 16:58:33 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@12 -- # local i 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.057 16:58:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.313 /dev/nbd0 00:06:17.313 16:58:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.313 16:58:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.313 16:58:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:17.313 16:58:33 -- common/autotest_common.sh@857 -- # local i 00:06:17.313 16:58:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:17.313 16:58:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:17.313 16:58:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:17.313
16:58:33 -- common/autotest_common.sh@861 -- # break 00:06:17.313 16:58:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:17.313 16:58:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:17.313 16:58:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.313 1+0 records in 00:06:17.313 1+0 records out 00:06:17.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179895 s, 22.8 MB/s 00:06:17.313 16:58:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.313 16:58:33 -- common/autotest_common.sh@874 -- # size=4096 00:06:17.313 16:58:33 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.313 16:58:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:17.313 16:58:33 -- common/autotest_common.sh@877 -- # return 0 00:06:17.313 16:58:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.313 16:58:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.313 16:58:33 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.570 /dev/nbd1 00:06:17.570 16:58:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.570 16:58:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.570 16:58:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:17.570 16:58:33 -- common/autotest_common.sh@857 -- # local i 00:06:17.570 16:58:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:17.570 16:58:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:17.570 16:58:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:17.570 16:58:33 -- common/autotest_common.sh@861 -- # break 00:06:17.570 16:58:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:17.570 16:58:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:17.570 16:58:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.570 1+0 records in 00:06:17.570 1+0 records out 00:06:17.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167217 s, 24.5 MB/s 00:06:17.570 16:58:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.570 16:58:33 -- common/autotest_common.sh@874 -- # size=4096 00:06:17.570 16:58:33 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:17.570 16:58:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:17.570 16:58:33 -- common/autotest_common.sh@877 -- # return 0 00:06:17.570 16:58:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.570 16:58:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.570 16:58:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.570 16:58:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.570 16:58:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.828 { 00:06:17.828 "nbd_device": "/dev/nbd0", 00:06:17.828 "bdev_name": "Malloc0" 00:06:17.828 }, 00:06:17.828 { 00:06:17.828 "nbd_device": "/dev/nbd1",
00:06:17.828 "bdev_name": "Malloc1" 00:06:17.828 } 00:06:17.828 ]' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.828 { 00:06:17.828 "nbd_device": "/dev/nbd0", 00:06:17.828 "bdev_name": "Malloc0" 00:06:17.828 }, 00:06:17.828 { 00:06:17.828 "nbd_device": "/dev/nbd1", 00:06:17.828 "bdev_name": "Malloc1" 00:06:17.828 } 00:06:17.828 ]' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.828 /dev/nbd1' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.828 /dev/nbd1' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.828 256+0 records in 00:06:17.828 256+0 records out 00:06:17.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498352 s, 210 MB/s 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.828 256+0 records in 00:06:17.828 256+0 records out 00:06:17.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204515 s, 51.3 MB/s 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.828 256+0 records in 00:06:17.828 256+0 records out 00:06:17.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223783 s, 46.9 MB/s 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@51 -- # local i 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.828 16:58:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@41 -- # break 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.086 16:58:34 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@41 -- # break 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.343 16:58:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.600 16:58:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.600 16:58:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.600 16:58:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.600 16:58:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.600 16:58:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.857 16:58:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.857 16:58:34 -- bdev/nbd_common.sh@65 -- # true 00:06:18.857 16:58:34 -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.857 16:58:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.857 16:58:34 -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.857 16:58:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.857 16:58:34 -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.857 16:58:34 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.115 16:58:35 -- event/event.sh@35 -- # 
sleep 3 00:06:19.115 [2024-07-20 16:58:35.243826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.372 [2024-07-20 16:58:35.330645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.372 [2024-07-20 16:58:35.330646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.372 [2024-07-20 16:58:35.391760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.372 [2024-07-20 16:58:35.391871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.899 16:58:38 -- event/event.sh@23 -- # for i in {0..2} 00:06:21.899 16:58:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:21.899 spdk_app_start Round 1 00:06:21.899 16:58:38 -- event/event.sh@25 -- # waitforlisten 420670 /var/tmp/spdk-nbd.sock 00:06:21.899 16:58:38 -- common/autotest_common.sh@819 -- # '[' -z 420670 ']' 00:06:21.899 16:58:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.899 16:58:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.899 16:58:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.899 16:58:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.899 16:58:38 -- common/autotest_common.sh@10 -- # set +x 00:06:22.157 16:58:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.157 16:58:38 -- common/autotest_common.sh@852 -- # return 0 00:06:22.157 16:58:38 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.415 Malloc0 00:06:22.415 16:58:38 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.674 Malloc1 00:06:22.674 16:58:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@12 -- # local i 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.674 16:58:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.933 /dev/nbd0 00:06:22.933 16:58:39 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.933 16:58:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.933 16:58:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:22.933 16:58:39 -- common/autotest_common.sh@857 -- # local i 00:06:22.933 16:58:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:22.933 16:58:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:22.933 16:58:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:22.933 16:58:39 -- common/autotest_common.sh@861 -- # break 00:06:22.933 16:58:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:22.933 16:58:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:22.933 16:58:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.933 1+0 records in 00:06:22.933 1+0 records out 00:06:22.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000132437 s, 30.9 MB/s 00:06:22.933 16:58:39 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.933 16:58:39 -- common/autotest_common.sh@874 -- # size=4096 00:06:22.933 16:58:39 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:22.933 16:58:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:22.933 16:58:39 -- common/autotest_common.sh@877 -- # return 0 00:06:22.933 16:58:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.933 16:58:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.933 16:58:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.191 /dev/nbd1 00:06:23.191 16:58:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.191 16:58:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.191 16:58:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:23.191 16:58:39 -- common/autotest_common.sh@857 -- # local i 00:06:23.191 16:58:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:23.191 16:58:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:23.191 16:58:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:23.191 16:58:39 -- common/autotest_common.sh@861 -- # break 00:06:23.191 16:58:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:23.191 16:58:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:23.191 16:58:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.191 1+0 records in 00:06:23.191 1+0 records out 00:06:23.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199571 s, 20.5 MB/s 00:06:23.191 16:58:39 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.191 16:58:39 -- common/autotest_common.sh@874 -- # size=4096 00:06:23.191 16:58:39 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.191 16:58:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:23.191 16:58:39 -- common/autotest_common.sh@877 -- # return 0 00:06:23.191 16:58:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.191 16:58:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.191 16:58:39 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.191 16:58:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.191 16:58:39 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.448 16:58:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.448 { 00:06:23.448 "nbd_device": "/dev/nbd0", 00:06:23.448 "bdev_name": "Malloc0" 00:06:23.448 }, 00:06:23.448 { 00:06:23.448 "nbd_device": "/dev/nbd1", 00:06:23.448 "bdev_name": "Malloc1" 00:06:23.448 } 00:06:23.448 ]' 00:06:23.448 16:58:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.448 { 00:06:23.448 "nbd_device": "/dev/nbd0", 00:06:23.448 "bdev_name": "Malloc0" 00:06:23.448 }, 00:06:23.448 { 00:06:23.448 "nbd_device": "/dev/nbd1", 00:06:23.448 "bdev_name": "Malloc1" 00:06:23.449 } 00:06:23.449 ]' 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.449 /dev/nbd1' 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.449 /dev/nbd1' 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.449 256+0 records in 00:06:23.449 256+0 records out 00:06:23.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505516 s, 207 MB/s 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.449 256+0 records in 00:06:23.449 256+0 records out 00:06:23.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238877 s, 43.9 MB/s 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.449 16:58:39 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.706 256+0 records in 00:06:23.706 256+0 records out 00:06:23.706 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244989 s, 42.8 MB/s 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@51 -- # local i 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.706 16:58:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@41 -- # break 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.963 16:58:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@41 -- # break 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.220 16:58:40 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@65 -- # true 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.477 16:58:40 -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.477 16:58:40 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.733 16:58:40 -- event/event.sh@35 -- # sleep 3 00:06:24.991 [2024-07-20 16:58:40.915429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.991 [2024-07-20 16:58:41.003950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.991 [2024-07-20 16:58:41.003955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.991 [2024-07-20 16:58:41.064821] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.991 [2024-07-20 16:58:41.064907] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.268 16:58:43 -- event/event.sh@23 -- # for i in {0..2} 00:06:28.268 16:58:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:28.268 spdk_app_start Round 2 00:06:28.268 16:58:43 -- event/event.sh@25 -- # waitforlisten 420670 /var/tmp/spdk-nbd.sock 00:06:28.268 16:58:43 -- common/autotest_common.sh@819 -- # '[' -z 420670 ']' 00:06:28.268 16:58:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.268 16:58:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.268 16:58:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
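For reference, the per-round setup that the xtrace below walks through reduces to four RPC calls; a minimal sketch, assuming the repo's scripts/rpc.py is on PATH (shortened to rpc.py here) and the app_repeat binary is already listening on /var/tmp/spdk-nbd.sock:

  # create two 64 MB malloc bdevs with a 4096-byte block size
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # -> Malloc1
  # expose each bdev as a kernel nbd device
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1

Each round then pushes data through both devices, verifies it, detaches the disks, and SIGTERMs the app before the next iteration.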
00:06:28.268 16:58:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.268 16:58:43 -- common/autotest_common.sh@10 -- # set +x 00:06:28.268 16:58:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.268 16:58:43 -- common/autotest_common.sh@852 -- # return 0 00:06:28.268 16:58:43 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.268 Malloc0 00:06:28.268 16:58:44 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.268 Malloc1 00:06:28.526 16:58:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@12 -- # local i 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.526 /dev/nbd0 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.526 16:58:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.526 16:58:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:28.526 16:58:44 -- common/autotest_common.sh@857 -- # local i 00:06:28.526 16:58:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:28.526 16:58:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:28.526 16:58:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:28.783 16:58:44 -- common/autotest_common.sh@861 -- # break 00:06:28.784 16:58:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:28.784 16:58:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:28.784 16:58:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.784 1+0 records in 00:06:28.784 1+0 records out 00:06:28.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000138359 s, 29.6 MB/s 00:06:28.784 16:58:44 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.784 16:58:44 -- common/autotest_common.sh@874 -- # size=4096 00:06:28.784 16:58:44 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:28.784 16:58:44 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:28.784 16:58:44 -- common/autotest_common.sh@877 -- # return 0 00:06:28.784 16:58:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.784 16:58:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.784 16:58:44 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.784 /dev/nbd1 00:06:29.041 16:58:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.041 16:58:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.041 16:58:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:29.041 16:58:44 -- common/autotest_common.sh@857 -- # local i 00:06:29.041 16:58:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:29.041 16:58:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:29.041 16:58:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:29.041 16:58:44 -- common/autotest_common.sh@861 -- # break 00:06:29.041 16:58:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:29.041 16:58:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:29.041 16:58:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.041 1+0 records in 00:06:29.041 1+0 records out 00:06:29.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201952 s, 20.3 MB/s 00:06:29.041 16:58:44 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.041 16:58:44 -- common/autotest_common.sh@874 -- # size=4096 00:06:29.041 16:58:44 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.041 16:58:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:29.041 16:58:44 -- common/autotest_common.sh@877 -- # return 0 00:06:29.041 16:58:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.041 16:58:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.041 16:58:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.041 16:58:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.041 16:58:44 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.041 16:58:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.041 { 00:06:29.041 "nbd_device": "/dev/nbd0", 00:06:29.041 "bdev_name": "Malloc0" 00:06:29.041 }, 00:06:29.041 { 00:06:29.041 "nbd_device": "/dev/nbd1", 00:06:29.041 "bdev_name": "Malloc1" 00:06:29.041 } 00:06:29.041 ]' 00:06:29.041 16:58:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.041 { 00:06:29.041 "nbd_device": "/dev/nbd0", 00:06:29.041 "bdev_name": "Malloc0" 00:06:29.041 }, 00:06:29.041 { 00:06:29.041 "nbd_device": "/dev/nbd1", 00:06:29.041 "bdev_name": "Malloc1" 00:06:29.041 } 00:06:29.041 ]' 00:06:29.041 16:58:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.299 /dev/nbd1' 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.299 /dev/nbd1' 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.299 16:58:45 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.299 256+0 records in 00:06:29.299 256+0 records out 00:06:29.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503586 s, 208 MB/s 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.299 256+0 records in 00:06:29.299 256+0 records out 00:06:29.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232384 s, 45.1 MB/s 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.299 256+0 records in 00:06:29.299 256+0 records out 00:06:29.299 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246679 s, 42.5 MB/s 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@51 -- # local i 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.299 16:58:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.556 16:58:45 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.556 16:58:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.556 16:58:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.556 16:58:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.556 16:58:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.556 16:58:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.556 16:58:45 -- bdev/nbd_common.sh@41 -- # break 00:06:29.556 16:58:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.556 16:58:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.556 16:58:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@41 -- # break 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.814 16:58:45 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@65 -- # true 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.070 16:58:46 -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.070 16:58:46 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.328 16:58:46 -- event/event.sh@35 -- # sleep 3 00:06:30.585 [2024-07-20 16:58:46.564331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.586 [2024-07-20 16:58:46.651032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.586 [2024-07-20 16:58:46.651038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.586 [2024-07-20 16:58:46.711922] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.586 [2024-07-20 16:58:46.711994] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
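The write/verify pass that each round above performs condenses to the following sketch, with the long workspace prefix shortened to $SPDK (an illustrative variable, not one the script defines):

  # stage 1 MiB of random data, then write it through each nbd device,
  # bypassing the page cache so the SPDK nbd path is actually exercised
  dd if=/dev/urandom of=$SPDK/test/event/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$SPDK/test/event/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
  done
  # read back through the devices and compare byte-for-byte
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M $SPDK/test/event/nbdrandtest $nbd
  done
  rm $SPDK/test/event/nbdrandtest

A mismatch would make cmp exit non-zero and fail the test; the clean teardown that follows shows both comparisons passed.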
00:06:33.859 16:58:49 -- event/event.sh@38 -- # waitforlisten 420670 /var/tmp/spdk-nbd.sock 00:06:33.859 16:58:49 -- common/autotest_common.sh@819 -- # '[' -z 420670 ']' 00:06:33.859 16:58:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.859 16:58:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.859 16:58:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.859 16:58:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.859 16:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:33.859 16:58:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.859 16:58:49 -- common/autotest_common.sh@852 -- # return 0 00:06:33.859 16:58:49 -- event/event.sh@39 -- # killprocess 420670 00:06:33.859 16:58:49 -- common/autotest_common.sh@926 -- # '[' -z 420670 ']' 00:06:33.859 16:58:49 -- common/autotest_common.sh@930 -- # kill -0 420670 00:06:33.859 16:58:49 -- common/autotest_common.sh@931 -- # uname 00:06:33.859 16:58:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:33.860 16:58:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 420670 00:06:33.860 16:58:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:33.860 16:58:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:33.860 16:58:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 420670' 00:06:33.860 killing process with pid 420670 00:06:33.860 16:58:49 -- common/autotest_common.sh@945 -- # kill 420670 00:06:33.860 16:58:49 -- common/autotest_common.sh@950 -- # wait 420670 00:06:33.860 spdk_app_start is called in Round 0. 00:06:33.860 Shutdown signal received, stop current app iteration 00:06:33.860 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:33.860 spdk_app_start is called in Round 1. 00:06:33.860 Shutdown signal received, stop current app iteration 00:06:33.860 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:33.860 spdk_app_start is called in Round 2. 00:06:33.860 Shutdown signal received, stop current app iteration 00:06:33.860 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:33.860 spdk_app_start is called in Round 3. 
00:06:33.860 Shutdown signal received, stop current app iteration 00:06:33.860 16:58:49 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:33.860 16:58:49 -- event/event.sh@42 -- # return 0 00:06:33.860 00:06:33.860 real 0m18.199s 00:06:33.860 user 0m39.935s 00:06:33.860 sys 0m3.313s 00:06:33.860 16:58:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.860 16:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:33.860 ************************************ 00:06:33.860 END TEST app_repeat 00:06:33.860 ************************************ 00:06:33.860 16:58:49 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:33.860 16:58:49 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.860 16:58:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.860 16:58:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.860 16:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:33.860 ************************************ 00:06:33.860 START TEST cpu_locks 00:06:33.860 ************************************ 00:06:33.860 16:58:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:33.860 * Looking for test storage... 00:06:33.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:33.860 16:58:49 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:33.860 16:58:49 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:33.860 16:58:49 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:33.860 16:58:49 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:33.860 16:58:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.860 16:58:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.860 16:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:33.860 ************************************ 00:06:33.860 START TEST default_locks 00:06:33.860 ************************************ 00:06:33.860 16:58:49 -- common/autotest_common.sh@1104 -- # default_locks 00:06:33.860 16:58:49 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=423198 00:06:33.860 16:58:49 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.860 16:58:49 -- event/cpu_locks.sh@47 -- # waitforlisten 423198 00:06:33.860 16:58:49 -- common/autotest_common.sh@819 -- # '[' -z 423198 ']' 00:06:33.860 16:58:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.860 16:58:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.860 16:58:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.860 16:58:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.860 16:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:33.860 [2024-07-20 16:58:49.958494] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:33.860 [2024-07-20 16:58:49.958576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423198 ] 00:06:33.860 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.117 [2024-07-20 16:58:50.019167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.117 [2024-07-20 16:58:50.105747] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.117 [2024-07-20 16:58:50.105969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.049 16:58:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.049 16:58:50 -- common/autotest_common.sh@852 -- # return 0 00:06:35.049 16:58:50 -- event/cpu_locks.sh@49 -- # locks_exist 423198 00:06:35.049 16:58:50 -- event/cpu_locks.sh@22 -- # lslocks -p 423198 00:06:35.049 16:58:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.049 lslocks: write error 00:06:35.049 16:58:51 -- event/cpu_locks.sh@50 -- # killprocess 423198 00:06:35.049 16:58:51 -- common/autotest_common.sh@926 -- # '[' -z 423198 ']' 00:06:35.049 16:58:51 -- common/autotest_common.sh@930 -- # kill -0 423198 00:06:35.049 16:58:51 -- common/autotest_common.sh@931 -- # uname 00:06:35.049 16:58:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.049 16:58:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 423198 00:06:35.049 16:58:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.049 16:58:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.049 16:58:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 423198' 00:06:35.049 killing process with pid 423198 00:06:35.049 16:58:51 -- common/autotest_common.sh@945 -- # kill 423198 00:06:35.049 16:58:51 -- common/autotest_common.sh@950 -- # wait 423198 00:06:35.612 16:58:51 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 423198 00:06:35.612 16:58:51 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.612 16:58:51 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 423198 00:06:35.612 16:58:51 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:35.612 16:58:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.612 16:58:51 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:35.612 16:58:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.612 16:58:51 -- common/autotest_common.sh@643 -- # waitforlisten 423198 00:06:35.612 16:58:51 -- common/autotest_common.sh@819 -- # '[' -z 423198 ']' 00:06:35.612 16:58:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.612 16:58:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.612 16:58:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
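The locks_exist check exercised above amounts to asking the kernel which file locks the target pid holds; a sketch of its shape, reconstructed from the xtrace at cpu_locks.sh@22:

  locks_exist() {
      local pid=$1
      # spdk_tgt takes a lock file per claimed CPU core; lslocks lists the
      # locks held by the pid and grep -q succeeds if one of them matches
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

The stray 'lslocks: write error' lines are consistent with grep -q closing the pipe as soon as it matches, not with a test failure. The NOT wrapper around the second waitforlisten inverts the expectation: since the target was just killed, the wait is supposed to fail, which the 'No such process' error below confirms.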
00:06:35.612 16:58:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.612 16:58:51 -- common/autotest_common.sh@10 -- # set +x 00:06:35.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (423198) - No such process 00:06:35.612 ERROR: process (pid: 423198) is no longer running 00:06:35.612 16:58:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.612 16:58:51 -- common/autotest_common.sh@852 -- # return 1 00:06:35.612 16:58:51 -- common/autotest_common.sh@643 -- # es=1 00:06:35.612 16:58:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:35.612 16:58:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:35.612 16:58:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:35.612 16:58:51 -- event/cpu_locks.sh@54 -- # no_locks 00:06:35.612 16:58:51 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.612 16:58:51 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.612 16:58:51 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.612 00:06:35.612 real 0m1.701s 00:06:35.612 user 0m1.843s 00:06:35.612 sys 0m0.547s 00:06:35.612 16:58:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.612 16:58:51 -- common/autotest_common.sh@10 -- # set +x 00:06:35.612 ************************************ 00:06:35.612 END TEST default_locks 00:06:35.612 ************************************ 00:06:35.612 16:58:51 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:35.612 16:58:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.612 16:58:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.612 16:58:51 -- common/autotest_common.sh@10 -- # set +x 00:06:35.612 ************************************ 00:06:35.612 START TEST default_locks_via_rpc 00:06:35.612 ************************************ 00:06:35.612 16:58:51 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:35.612 16:58:51 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=423375 00:06:35.612 16:58:51 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.612 16:58:51 -- event/cpu_locks.sh@63 -- # waitforlisten 423375 00:06:35.612 16:58:51 -- common/autotest_common.sh@819 -- # '[' -z 423375 ']' 00:06:35.612 16:58:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.612 16:58:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.612 16:58:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.612 16:58:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.612 16:58:51 -- common/autotest_common.sh@10 -- # set +x 00:06:35.612 [2024-07-20 16:58:51.691104] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:35.612 [2024-07-20 16:58:51.691200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423375 ] 00:06:35.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.612 [2024-07-20 16:58:51.753542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.870 [2024-07-20 16:58:51.841522] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.870 [2024-07-20 16:58:51.841701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.803 16:58:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.803 16:58:52 -- common/autotest_common.sh@852 -- # return 0 00:06:36.803 16:58:52 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:36.803 16:58:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:36.803 16:58:52 -- common/autotest_common.sh@10 -- # set +x 00:06:36.803 16:58:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:36.803 16:58:52 -- event/cpu_locks.sh@67 -- # no_locks 00:06:36.803 16:58:52 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.803 16:58:52 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.803 16:58:52 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.803 16:58:52 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.803 16:58:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:36.803 16:58:52 -- common/autotest_common.sh@10 -- # set +x 00:06:36.803 16:58:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:36.803 16:58:52 -- event/cpu_locks.sh@71 -- # locks_exist 423375 00:06:36.803 16:58:52 -- event/cpu_locks.sh@22 -- # lslocks -p 423375 00:06:36.803 16:58:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.060 16:58:52 -- event/cpu_locks.sh@73 -- # killprocess 423375 00:06:37.060 16:58:52 -- common/autotest_common.sh@926 -- # '[' -z 423375 ']' 00:06:37.060 16:58:52 -- common/autotest_common.sh@930 -- # kill -0 423375 00:06:37.060 16:58:52 -- common/autotest_common.sh@931 -- # uname 00:06:37.060 16:58:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.060 16:58:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 423375 00:06:37.060 16:58:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.060 16:58:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.060 16:58:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 423375' 00:06:37.060 killing process with pid 423375 00:06:37.060 16:58:53 -- common/autotest_common.sh@945 -- # kill 423375 00:06:37.060 16:58:53 -- common/autotest_common.sh@950 -- # wait 423375 00:06:37.317 00:06:37.317 real 0m1.763s 00:06:37.317 user 0m1.867s 00:06:37.317 sys 0m0.570s 00:06:37.317 16:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.317 16:58:53 -- common/autotest_common.sh@10 -- # set +x 00:06:37.317 ************************************ 00:06:37.317 END TEST default_locks_via_rpc 00:06:37.317 ************************************ 00:06:37.317 16:58:53 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:37.317 16:58:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.317 16:58:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.317 16:58:53 -- common/autotest_common.sh@10 
-- # set +x 00:06:37.317 ************************************ 00:06:37.317 START TEST non_locking_app_on_locked_coremask 00:06:37.317 ************************************ 00:06:37.317 16:58:53 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:37.317 16:58:53 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=423674 00:06:37.317 16:58:53 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.317 16:58:53 -- event/cpu_locks.sh@81 -- # waitforlisten 423674 /var/tmp/spdk.sock 00:06:37.317 16:58:53 -- common/autotest_common.sh@819 -- # '[' -z 423674 ']' 00:06:37.317 16:58:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.317 16:58:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.317 16:58:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.317 16:58:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.317 16:58:53 -- common/autotest_common.sh@10 -- # set +x 00:06:37.574 [2024-07-20 16:58:53.483990] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:37.574 [2024-07-20 16:58:53.484084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423674 ] 00:06:37.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.574 [2024-07-20 16:58:53.553074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.574 [2024-07-20 16:58:53.646107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.574 [2024-07-20 16:58:53.646284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.505 16:58:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.505 16:58:54 -- common/autotest_common.sh@852 -- # return 0 00:06:38.505 16:58:54 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=423816 00:06:38.505 16:58:54 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:38.505 16:58:54 -- event/cpu_locks.sh@85 -- # waitforlisten 423816 /var/tmp/spdk2.sock 00:06:38.505 16:58:54 -- common/autotest_common.sh@819 -- # '[' -z 423816 ']' 00:06:38.505 16:58:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.505 16:58:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.505 16:58:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.505 16:58:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.505 16:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:38.505 [2024-07-20 16:58:54.518240] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:38.505 [2024-07-20 16:58:54.518317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid423816 ] 00:06:38.505 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.505 [2024-07-20 16:58:54.613256] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:38.505 [2024-07-20 16:58:54.613289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.762 [2024-07-20 16:58:54.795000] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.762 [2024-07-20 16:58:54.795193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.327 16:58:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.327 16:58:55 -- common/autotest_common.sh@852 -- # return 0 00:06:39.327 16:58:55 -- event/cpu_locks.sh@87 -- # locks_exist 423674 00:06:39.327 16:58:55 -- event/cpu_locks.sh@22 -- # lslocks -p 423674 00:06:39.327 16:58:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.890 lslocks: write error 00:06:39.890 16:58:56 -- event/cpu_locks.sh@89 -- # killprocess 423674 00:06:39.890 16:58:56 -- common/autotest_common.sh@926 -- # '[' -z 423674 ']' 00:06:39.890 16:58:56 -- common/autotest_common.sh@930 -- # kill -0 423674 00:06:39.890 16:58:56 -- common/autotest_common.sh@931 -- # uname 00:06:39.890 16:58:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.890 16:58:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 423674 00:06:39.890 16:58:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.891 16:58:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.891 16:58:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 423674' 00:06:39.891 killing process with pid 423674 00:06:39.891 16:58:56 -- common/autotest_common.sh@945 -- # kill 423674 00:06:39.891 16:58:56 -- common/autotest_common.sh@950 -- # wait 423674 00:06:40.822 16:58:56 -- event/cpu_locks.sh@90 -- # killprocess 423816 00:06:40.822 16:58:56 -- common/autotest_common.sh@926 -- # '[' -z 423816 ']' 00:06:40.822 16:58:56 -- common/autotest_common.sh@930 -- # kill -0 423816 00:06:40.822 16:58:56 -- common/autotest_common.sh@931 -- # uname 00:06:40.822 16:58:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.822 16:58:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 423816 00:06:40.822 16:58:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:40.822 16:58:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:40.822 16:58:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 423816' 00:06:40.822 killing process with pid 423816 00:06:40.822 16:58:56 -- common/autotest_common.sh@945 -- # kill 423816 00:06:40.822 16:58:56 -- common/autotest_common.sh@950 -- # wait 423816 00:06:41.420 00:06:41.420 real 0m3.865s 00:06:41.420 user 0m4.195s 00:06:41.420 sys 0m1.136s 00:06:41.420 16:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.420 16:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 ************************************ 00:06:41.420 END TEST non_locking_app_on_locked_coremask 00:06:41.420 ************************************ 00:06:41.420 16:58:57 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
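The next test pairs a target that declines the core lock with one that then claims it; a sketch of the two launches that follow, with $SPDK_BIN standing in for the build/bin path seen in the log:

  # first target: core mask 0x1, but skip taking the per-core lock file
  $SPDK_BIN/spdk_tgt -m 0x1 --disable-cpumask-locks &
  # second target: same core mask, separate RPC socket; the expectation is
  # that it can lock core 0 precisely because the first target left it free
  $SPDK_BIN/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &

The 'CPU core locks deactivated' notice below belongs to the first launch; the locks_exist check is then run against the second pid.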
00:06:41.420 16:58:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:41.420 16:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.420 16:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 ************************************ 00:06:41.420 START TEST locking_app_on_unlocked_coremask 00:06:41.420 ************************************ 00:06:41.420 16:58:57 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:41.420 16:58:57 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=424130 00:06:41.420 16:58:57 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.420 16:58:57 -- event/cpu_locks.sh@99 -- # waitforlisten 424130 /var/tmp/spdk.sock 00:06:41.420 16:58:57 -- common/autotest_common.sh@819 -- # '[' -z 424130 ']' 00:06:41.420 16:58:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.420 16:58:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.420 16:58:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.420 16:58:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.420 16:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:41.420 [2024-07-20 16:58:57.367977] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:41.420 [2024-07-20 16:58:57.368059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424130 ] 00:06:41.420 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.420 [2024-07-20 16:58:57.425407] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.420 [2024-07-20 16:58:57.425445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.420 [2024-07-20 16:58:57.510911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.420 [2024-07-20 16:58:57.511070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.353 16:58:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:42.353 16:58:58 -- common/autotest_common.sh@852 -- # return 0 00:06:42.353 16:58:58 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=424269 00:06:42.353 16:58:58 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.353 16:58:58 -- event/cpu_locks.sh@103 -- # waitforlisten 424269 /var/tmp/spdk2.sock 00:06:42.353 16:58:58 -- common/autotest_common.sh@819 -- # '[' -z 424269 ']' 00:06:42.353 16:58:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.353 16:58:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.354 16:58:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:42.354 16:58:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.354 16:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:42.354 [2024-07-20 16:58:58.343560] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:42.354 [2024-07-20 16:58:58.343638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424269 ] 00:06:42.354 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.354 [2024-07-20 16:58:58.436647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.611 [2024-07-20 16:58:58.613503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.611 [2024-07-20 16:58:58.613691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.177 16:58:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.177 16:58:59 -- common/autotest_common.sh@852 -- # return 0 00:06:43.177 16:58:59 -- event/cpu_locks.sh@105 -- # locks_exist 424269 00:06:43.177 16:58:59 -- event/cpu_locks.sh@22 -- # lslocks -p 424269 00:06:43.177 16:58:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.745 lslocks: write error 00:06:43.745 16:58:59 -- event/cpu_locks.sh@107 -- # killprocess 424130 00:06:43.745 16:58:59 -- common/autotest_common.sh@926 -- # '[' -z 424130 ']' 00:06:43.745 16:58:59 -- common/autotest_common.sh@930 -- # kill -0 424130 00:06:43.745 16:58:59 -- common/autotest_common.sh@931 -- # uname 00:06:43.745 16:58:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:43.745 16:58:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 424130 00:06:43.745 16:58:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:43.745 16:58:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:43.745 16:58:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 424130' 00:06:43.745 killing process with pid 424130 00:06:43.745 16:58:59 -- common/autotest_common.sh@945 -- # kill 424130 00:06:43.745 16:58:59 -- common/autotest_common.sh@950 -- # wait 424130 00:06:44.680 16:59:00 -- event/cpu_locks.sh@108 -- # killprocess 424269 00:06:44.680 16:59:00 -- common/autotest_common.sh@926 -- # '[' -z 424269 ']' 00:06:44.680 16:59:00 -- common/autotest_common.sh@930 -- # kill -0 424269 00:06:44.680 16:59:00 -- common/autotest_common.sh@931 -- # uname 00:06:44.680 16:59:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:44.680 16:59:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 424269 00:06:44.680 16:59:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.680 16:59:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.680 16:59:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 424269' 00:06:44.680 killing process with pid 424269 00:06:44.680 16:59:00 -- common/autotest_common.sh@945 -- # kill 424269 00:06:44.680 16:59:00 -- common/autotest_common.sh@950 -- # wait 424269 00:06:44.938 00:06:44.938 real 0m3.750s 00:06:44.938 user 0m4.045s 00:06:44.938 sys 0m1.080s 00:06:44.938 16:59:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.938 16:59:01 -- common/autotest_common.sh@10 -- # set +x 00:06:44.938 ************************************ 00:06:44.938 END TEST locking_app_on_unlocked_coremask 00:06:44.938 
************************************ 00:06:44.938 16:59:01 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.938 16:59:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.938 16:59:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.938 16:59:01 -- common/autotest_common.sh@10 -- # set +x 00:06:45.197 ************************************ 00:06:45.197 START TEST locking_app_on_locked_coremask 00:06:45.197 ************************************ 00:06:45.197 16:59:01 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:45.197 16:59:01 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=424605 00:06:45.197 16:59:01 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.197 16:59:01 -- event/cpu_locks.sh@116 -- # waitforlisten 424605 /var/tmp/spdk.sock 00:06:45.197 16:59:01 -- common/autotest_common.sh@819 -- # '[' -z 424605 ']' 00:06:45.197 16:59:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.197 16:59:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.197 16:59:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.197 16:59:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.197 16:59:01 -- common/autotest_common.sh@10 -- # set +x 00:06:45.197 [2024-07-20 16:59:01.148119] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:45.197 [2024-07-20 16:59:01.148203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424605 ] 00:06:45.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.197 [2024-07-20 16:59:01.206898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.197 [2024-07-20 16:59:01.293515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.197 [2024-07-20 16:59:01.293681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.131 16:59:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.131 16:59:02 -- common/autotest_common.sh@852 -- # return 0 00:06:46.131 16:59:02 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=424724 00:06:46.131 16:59:02 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.131 16:59:02 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 424724 /var/tmp/spdk2.sock 00:06:46.131 16:59:02 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.131 16:59:02 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 424724 /var/tmp/spdk2.sock 00:06:46.131 16:59:02 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:46.131 16:59:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.131 16:59:02 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:46.131 16:59:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.131 16:59:02 -- common/autotest_common.sh@643 -- # waitforlisten 424724 /var/tmp/spdk2.sock 00:06:46.131 16:59:02 -- common/autotest_common.sh@819 -- # '[' -z 424724 ']' 
00:06:46.131 16:59:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.131 16:59:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.131 16:59:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.131 16:59:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.131 16:59:02 -- common/autotest_common.sh@10 -- # set +x 00:06:46.131 [2024-07-20 16:59:02.124653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:46.131 [2024-07-20 16:59:02.124727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid424724 ] 00:06:46.131 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.131 [2024-07-20 16:59:02.217619] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 424605 has claimed it. 00:06:46.131 [2024-07-20 16:59:02.217676] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (424724) - No such process 00:06:46.697 ERROR: process (pid: 424724) is no longer running 00:06:46.697 16:59:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.697 16:59:02 -- common/autotest_common.sh@852 -- # return 1 00:06:46.697 16:59:02 -- common/autotest_common.sh@643 -- # es=1 00:06:46.697 16:59:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.697 16:59:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:46.697 16:59:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.697 16:59:02 -- event/cpu_locks.sh@122 -- # locks_exist 424605 00:06:46.697 16:59:02 -- event/cpu_locks.sh@22 -- # lslocks -p 424605 00:06:46.697 16:59:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.285 lslocks: write error 00:06:47.285 16:59:03 -- event/cpu_locks.sh@124 -- # killprocess 424605 00:06:47.285 16:59:03 -- common/autotest_common.sh@926 -- # '[' -z 424605 ']' 00:06:47.285 16:59:03 -- common/autotest_common.sh@930 -- # kill -0 424605 00:06:47.285 16:59:03 -- common/autotest_common.sh@931 -- # uname 00:06:47.285 16:59:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:47.285 16:59:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 424605 00:06:47.285 16:59:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:47.285 16:59:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:47.285 16:59:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 424605' 00:06:47.285 killing process with pid 424605 00:06:47.285 16:59:03 -- common/autotest_common.sh@945 -- # kill 424605 00:06:47.285 16:59:03 -- common/autotest_common.sh@950 -- # wait 424605 00:06:47.543 00:06:47.543 real 0m2.507s 00:06:47.543 user 0m2.864s 00:06:47.543 sys 0m0.663s 00:06:47.543 16:59:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.543 16:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:47.543 ************************************ 00:06:47.543 END TEST locking_app_on_locked_coremask 00:06:47.543 ************************************ 00:06:47.543 16:59:03 -- event/cpu_locks.sh@171 -- 
# run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.543 16:59:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.543 16:59:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.543 16:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:47.543 ************************************ 00:06:47.543 START TEST locking_overlapped_coremask 00:06:47.543 ************************************ 00:06:47.543 16:59:03 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:47.543 16:59:03 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=425016 00:06:47.543 16:59:03 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.544 16:59:03 -- event/cpu_locks.sh@133 -- # waitforlisten 425016 /var/tmp/spdk.sock 00:06:47.544 16:59:03 -- common/autotest_common.sh@819 -- # '[' -z 425016 ']' 00:06:47.544 16:59:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.544 16:59:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:47.544 16:59:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.544 16:59:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:47.544 16:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:47.544 [2024-07-20 16:59:03.684545] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:47.544 [2024-07-20 16:59:03.684642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425016 ] 00:06:47.801 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.801 [2024-07-20 16:59:03.747436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.801 [2024-07-20 16:59:03.833908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.801 [2024-07-20 16:59:03.834155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.801 [2024-07-20 16:59:03.834232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.801 [2024-07-20 16:59:03.834214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.732 16:59:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.732 16:59:04 -- common/autotest_common.sh@852 -- # return 0 00:06:48.732 16:59:04 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=425158 00:06:48.732 16:59:04 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.732 16:59:04 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 425158 /var/tmp/spdk2.sock 00:06:48.732 16:59:04 -- common/autotest_common.sh@640 -- # local es=0 00:06:48.732 16:59:04 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 425158 /var/tmp/spdk2.sock 00:06:48.732 16:59:04 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:48.732 16:59:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:48.732 16:59:04 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:48.733 16:59:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:48.733 16:59:04 -- common/autotest_common.sh@643 -- # 
waitforlisten 425158 /var/tmp/spdk2.sock 00:06:48.733 16:59:04 -- common/autotest_common.sh@819 -- # '[' -z 425158 ']' 00:06:48.733 16:59:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.733 16:59:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.733 16:59:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.733 16:59:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.733 16:59:04 -- common/autotest_common.sh@10 -- # set +x 00:06:48.733 [2024-07-20 16:59:04.657999] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:48.733 [2024-07-20 16:59:04.658104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425158 ] 00:06:48.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.733 [2024-07-20 16:59:04.745221] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 425016 has claimed it. 00:06:48.733 [2024-07-20 16:59:04.745287] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:49.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (425158) - No such process 00:06:49.297 ERROR: process (pid: 425158) is no longer running 00:06:49.297 16:59:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:49.297 16:59:05 -- common/autotest_common.sh@852 -- # return 1 00:06:49.297 16:59:05 -- common/autotest_common.sh@643 -- # es=1 00:06:49.297 16:59:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:49.297 16:59:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:49.297 16:59:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:49.297 16:59:05 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.297 16:59:05 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.297 16:59:05 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.297 16:59:05 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.297 16:59:05 -- event/cpu_locks.sh@141 -- # killprocess 425016 00:06:49.297 16:59:05 -- common/autotest_common.sh@926 -- # '[' -z 425016 ']' 00:06:49.297 16:59:05 -- common/autotest_common.sh@930 -- # kill -0 425016 00:06:49.297 16:59:05 -- common/autotest_common.sh@931 -- # uname 00:06:49.297 16:59:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.297 16:59:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 425016 00:06:49.297 16:59:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:49.297 16:59:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:49.297 16:59:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 425016' 00:06:49.297 killing process with pid 425016 00:06:49.297 16:59:05 -- common/autotest_common.sh@945 -- # kill 425016 00:06:49.297 16:59:05 -- common/autotest_common.sh@950 -- # wait 425016 
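The abort above is plain cpumask arithmetic: the first target was started with -m 0x7 (binary 111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so the two masks intersect on core 2 and the second claim fails exactly as logged ("Cannot create lock on core 2"). A one-liner to confirm the collision, offered only as a sketch and runnable anywhere:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2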
00:06:49.862 00:06:49.862 real 0m2.110s 00:06:49.862 user 0m6.024s 00:06:49.862 sys 0m0.473s 00:06:49.862 16:59:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.862 16:59:05 -- common/autotest_common.sh@10 -- # set +x 00:06:49.862 ************************************ 00:06:49.862 END TEST locking_overlapped_coremask 00:06:49.862 ************************************ 00:06:49.862 16:59:05 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.862 16:59:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.862 16:59:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.862 16:59:05 -- common/autotest_common.sh@10 -- # set +x 00:06:49.862 ************************************ 00:06:49.862 START TEST locking_overlapped_coremask_via_rpc 00:06:49.862 ************************************ 00:06:49.862 16:59:05 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:49.862 16:59:05 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=425324 00:06:49.862 16:59:05 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.862 16:59:05 -- event/cpu_locks.sh@149 -- # waitforlisten 425324 /var/tmp/spdk.sock 00:06:49.862 16:59:05 -- common/autotest_common.sh@819 -- # '[' -z 425324 ']' 00:06:49.862 16:59:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.862 16:59:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:49.862 16:59:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.862 16:59:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:49.862 16:59:05 -- common/autotest_common.sh@10 -- # set +x 00:06:49.862 [2024-07-20 16:59:05.823954] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:49.862 [2024-07-20 16:59:05.824035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425324 ] 00:06:49.862 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.862 [2024-07-20 16:59:05.886964] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.862 [2024-07-20 16:59:05.887014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.862 [2024-07-20 16:59:05.973382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:49.862 [2024-07-20 16:59:05.973634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.862 [2024-07-20 16:59:05.973692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.862 [2024-07-20 16:59:05.973695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.795 16:59:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:50.795 16:59:06 -- common/autotest_common.sh@852 -- # return 0 00:06:50.795 16:59:06 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=425396 00:06:50.795 16:59:06 -- event/cpu_locks.sh@153 -- # waitforlisten 425396 /var/tmp/spdk2.sock 00:06:50.795 16:59:06 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.795 16:59:06 -- common/autotest_common.sh@819 -- # '[' -z 425396 ']' 00:06:50.795 16:59:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.795 16:59:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:50.795 16:59:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.795 16:59:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:50.795 16:59:06 -- common/autotest_common.sh@10 -- # set +x 00:06:50.795 [2024-07-20 16:59:06.796293] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:50.795 [2024-07-20 16:59:06.796392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425396 ] 00:06:50.795 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.795 [2024-07-20 16:59:06.885190] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.795 [2024-07-20 16:59:06.885230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.053 [2024-07-20 16:59:07.059060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.053 [2024-07-20 16:59:07.059272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.053 [2024-07-20 16:59:07.059342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.053 [2024-07-20 16:59:07.059344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.618 16:59:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.618 16:59:07 -- common/autotest_common.sh@852 -- # return 0 00:06:51.618 16:59:07 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:51.618 16:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.618 16:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:51.618 16:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.618 16:59:07 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.618 16:59:07 -- common/autotest_common.sh@640 -- # local es=0 00:06:51.618 16:59:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.618 16:59:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:51.618 16:59:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.618 16:59:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:51.618 16:59:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.618 16:59:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:51.618 16:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.618 16:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:51.618 [2024-07-20 16:59:07.723907] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 425324 has claimed it. 00:06:51.618 request: 00:06:51.618 { 00:06:51.618 "method": "framework_enable_cpumask_locks", 00:06:51.618 "req_id": 1 00:06:51.618 } 00:06:51.618 Got JSON-RPC error response 00:06:51.618 response: 00:06:51.618 { 00:06:51.618 "code": -32603, 00:06:51.618 "message": "Failed to claim CPU core: 2" 00:06:51.618 } 00:06:51.618 16:59:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:51.618 16:59:07 -- common/autotest_common.sh@643 -- # es=1 00:06:51.618 16:59:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:51.618 16:59:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:51.618 16:59:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:51.618 16:59:07 -- event/cpu_locks.sh@158 -- # waitforlisten 425324 /var/tmp/spdk.sock 00:06:51.618 16:59:07 -- common/autotest_common.sh@819 -- # '[' -z 425324 ']' 00:06:51.618 16:59:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.618 16:59:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.618 16:59:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
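In the via_rpc variant both targets boot with --disable-cpumask-locks, so neither holds any core lock at startup; the framework_enable_cpumask_locks RPC shown above then claims the locks at runtime, and the second caller gets the JSON-RPC -32603 response ("Failed to claim CPU core: 2") because core 2 is already taken. A hand-driven sketch of the same sequence via scripts/rpc.py, assuming the two sockets above are still live — again an illustration, not part of the captured run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first claimer wins
    $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed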
00:06:51.618 16:59:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.618 16:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:51.876 16:59:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.876 16:59:07 -- common/autotest_common.sh@852 -- # return 0 00:06:51.876 16:59:07 -- event/cpu_locks.sh@159 -- # waitforlisten 425396 /var/tmp/spdk2.sock 00:06:51.876 16:59:07 -- common/autotest_common.sh@819 -- # '[' -z 425396 ']' 00:06:51.876 16:59:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.876 16:59:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:51.876 16:59:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.876 16:59:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.876 16:59:07 -- common/autotest_common.sh@10 -- # set +x 00:06:52.133 16:59:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.134 16:59:08 -- common/autotest_common.sh@852 -- # return 0 00:06:52.134 16:59:08 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.134 16:59:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.134 16:59:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.134 16:59:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.134 00:06:52.134 real 0m2.457s 00:06:52.134 user 0m1.160s 00:06:52.134 sys 0m0.229s 00:06:52.134 16:59:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.134 16:59:08 -- common/autotest_common.sh@10 -- # set +x 00:06:52.134 ************************************ 00:06:52.134 END TEST locking_overlapped_coremask_via_rpc 00:06:52.134 ************************************ 00:06:52.134 16:59:08 -- event/cpu_locks.sh@174 -- # cleanup 00:06:52.134 16:59:08 -- event/cpu_locks.sh@15 -- # [[ -z 425324 ]] 00:06:52.134 16:59:08 -- event/cpu_locks.sh@15 -- # killprocess 425324 00:06:52.134 16:59:08 -- common/autotest_common.sh@926 -- # '[' -z 425324 ']' 00:06:52.134 16:59:08 -- common/autotest_common.sh@930 -- # kill -0 425324 00:06:52.134 16:59:08 -- common/autotest_common.sh@931 -- # uname 00:06:52.134 16:59:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.134 16:59:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 425324 00:06:52.134 16:59:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:52.134 16:59:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:52.134 16:59:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 425324' 00:06:52.134 killing process with pid 425324 00:06:52.134 16:59:08 -- common/autotest_common.sh@945 -- # kill 425324 00:06:52.134 16:59:08 -- common/autotest_common.sh@950 -- # wait 425324 00:06:52.699 16:59:08 -- event/cpu_locks.sh@16 -- # [[ -z 425396 ]] 00:06:52.699 16:59:08 -- event/cpu_locks.sh@16 -- # killprocess 425396 00:06:52.699 16:59:08 -- common/autotest_common.sh@926 -- # '[' -z 425396 ']' 00:06:52.699 16:59:08 -- common/autotest_common.sh@930 -- # kill -0 425396 00:06:52.699 16:59:08 -- common/autotest_common.sh@931 -- # uname 00:06:52.699 
16:59:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.699 16:59:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 425396 00:06:52.699 16:59:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:52.699 16:59:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:52.699 16:59:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 425396' 00:06:52.699 killing process with pid 425396 00:06:52.699 16:59:08 -- common/autotest_common.sh@945 -- # kill 425396 00:06:52.699 16:59:08 -- common/autotest_common.sh@950 -- # wait 425396 00:06:52.957 16:59:09 -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.957 16:59:09 -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.957 16:59:09 -- event/cpu_locks.sh@15 -- # [[ -z 425324 ]] 00:06:52.957 16:59:09 -- event/cpu_locks.sh@15 -- # killprocess 425324 00:06:52.957 16:59:09 -- common/autotest_common.sh@926 -- # '[' -z 425324 ']' 00:06:52.957 16:59:09 -- common/autotest_common.sh@930 -- # kill -0 425324 00:06:52.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (425324) - No such process 00:06:52.957 16:59:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 425324 is not found' 00:06:52.957 Process with pid 425324 is not found 00:06:52.957 16:59:09 -- event/cpu_locks.sh@16 -- # [[ -z 425396 ]] 00:06:52.957 16:59:09 -- event/cpu_locks.sh@16 -- # killprocess 425396 00:06:52.957 16:59:09 -- common/autotest_common.sh@926 -- # '[' -z 425396 ']' 00:06:52.957 16:59:09 -- common/autotest_common.sh@930 -- # kill -0 425396 00:06:52.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (425396) - No such process 00:06:52.957 16:59:09 -- common/autotest_common.sh@953 -- # echo 'Process with pid 425396 is not found' 00:06:52.957 Process with pid 425396 is not found 00:06:52.957 16:59:09 -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.957 00:06:52.957 real 0m19.253s 00:06:52.957 user 0m34.107s 00:06:52.957 sys 0m5.503s 00:06:52.957 16:59:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.957 16:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:52.957 ************************************ 00:06:52.957 END TEST cpu_locks 00:06:52.957 ************************************ 00:06:53.215 00:06:53.216 real 0m44.684s 00:06:53.216 user 1m25.248s 00:06:53.216 sys 0m9.534s 00:06:53.216 16:59:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.216 16:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.216 ************************************ 00:06:53.216 END TEST event 00:06:53.216 ************************************ 00:06:53.216 16:59:09 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.216 16:59:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:53.216 16:59:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.216 16:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.216 ************************************ 00:06:53.216 START TEST thread 00:06:53.216 ************************************ 00:06:53.216 16:59:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.216 * Looking for test storage... 
00:06:53.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:53.216 16:59:09 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.216 16:59:09 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:53.216 16:59:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.216 16:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:53.216 ************************************ 00:06:53.216 START TEST thread_poller_perf 00:06:53.216 ************************************ 00:06:53.216 16:59:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.216 [2024-07-20 16:59:09.218256] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:53.216 [2024-07-20 16:59:09.218336] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425838 ] 00:06:53.216 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.216 [2024-07-20 16:59:09.277813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.216 [2024-07-20 16:59:09.364801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.216 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:54.595 ====================================== 00:06:54.595 busy:2707824518 (cyc) 00:06:54.595 total_run_count: 284000 00:06:54.595 tsc_hz: 2700000000 (cyc) 00:06:54.595 ====================================== 00:06:54.595 poller_cost: 9534 (cyc), 3531 (nsec) 00:06:54.595 00:06:54.595 real 0m1.251s 00:06:54.595 user 0m1.167s 00:06:54.595 sys 0m0.078s 00:06:54.595 16:59:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.595 16:59:10 -- common/autotest_common.sh@10 -- # set +x 00:06:54.595 ************************************ 00:06:54.595 END TEST thread_poller_perf 00:06:54.595 ************************************ 00:06:54.595 16:59:10 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.595 16:59:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:54.595 16:59:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.595 16:59:10 -- common/autotest_common.sh@10 -- # set +x 00:06:54.595 ************************************ 00:06:54.595 START TEST thread_poller_perf 00:06:54.595 ************************************ 00:06:54.595 16:59:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.595 [2024-07-20 16:59:10.496487] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
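Reading the summary above: poller_cost is simply busy cycles divided by total_run_count, converted to wall time through the reported TSC rate, so with a 1 microsecond poller period (-l 1) each poll costs about 9534 cycles, roughly 3.5 microseconds of overhead. The second run below, with -l 0 (no timer period), measures the tight-loop floor instead, which is why its cost is far lower. The arithmetic, as a sketch:

    busy=2707824518; runs=284000; tsc_hz=2700000000
    echo "cycles/poll: $(( busy / runs ))"                        # ~9534
    echo "nsec/poll:   $(( busy * 1000000000 / tsc_hz / runs ))"  # ~3531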
00:06:54.595 [2024-07-20 16:59:10.496572] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425993 ] 00:06:54.595 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.595 [2024-07-20 16:59:10.560603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.595 [2024-07-20 16:59:10.648047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.595 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:55.970 ====================================== 00:06:55.970 busy:2703736796 (cyc) 00:06:55.970 total_run_count: 3836000 00:06:55.970 tsc_hz: 2700000000 (cyc) 00:06:55.970 ====================================== 00:06:55.970 poller_cost: 704 (cyc), 260 (nsec) 00:06:55.970 00:06:55.970 real 0m1.251s 00:06:55.970 user 0m1.161s 00:06:55.970 sys 0m0.083s 00:06:55.970 16:59:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.970 16:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.970 ************************************ 00:06:55.970 END TEST thread_poller_perf 00:06:55.970 ************************************ 00:06:55.970 16:59:11 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:55.970 00:06:55.970 real 0m2.603s 00:06:55.970 user 0m2.367s 00:06:55.970 sys 0m0.237s 00:06:55.970 16:59:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.970 16:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.970 ************************************ 00:06:55.970 END TEST thread 00:06:55.970 ************************************ 00:06:55.970 16:59:11 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:55.970 16:59:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.970 16:59:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.970 16:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.970 ************************************ 00:06:55.970 START TEST accel 00:06:55.970 ************************************ 00:06:55.970 16:59:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:55.970 * Looking for test storage... 00:06:55.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:55.970 16:59:11 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:55.970 16:59:11 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:55.970 16:59:11 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:55.970 16:59:11 -- accel/accel.sh@59 -- # spdk_tgt_pid=426187 00:06:55.970 16:59:11 -- accel/accel.sh@60 -- # waitforlisten 426187 00:06:55.970 16:59:11 -- common/autotest_common.sh@819 -- # '[' -z 426187 ']' 00:06:55.970 16:59:11 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:55.970 16:59:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.970 16:59:11 -- accel/accel.sh@58 -- # build_accel_config 00:06:55.970 16:59:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:55.970 16:59:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.970 16:59:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:55.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.970 16:59:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.970 16:59:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:55.970 16:59:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.970 16:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:55.970 16:59:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.970 16:59:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.970 16:59:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.970 16:59:11 -- accel/accel.sh@42 -- # jq -r . 00:06:55.970 [2024-07-20 16:59:11.877771] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:55.970 [2024-07-20 16:59:11.877892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426187 ] 00:06:55.970 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.970 [2024-07-20 16:59:11.939179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.970 [2024-07-20 16:59:12.027854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:55.970 [2024-07-20 16:59:12.028017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.914 16:59:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:56.914 16:59:12 -- common/autotest_common.sh@852 -- # return 0 00:06:56.914 16:59:12 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:56.914 16:59:12 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:56.914 16:59:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:56.914 16:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:56.914 16:59:12 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:56.914 16:59:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:56.914 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.914 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.914 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.914 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.914 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.914 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.914 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.914 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.914 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.914 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.914 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.914 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.915 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.915 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.915 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.915 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.915 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.915 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.915 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.915 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.915 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.915 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.915 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.915 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.915 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.915 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.915 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.915 16:59:12 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # IFS== 00:06:56.915 16:59:12 -- accel/accel.sh@64 -- # read -r opc module 00:06:56.915 16:59:12 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:56.915 16:59:12 -- accel/accel.sh@67 -- # killprocess 426187 00:06:56.915 16:59:12 -- common/autotest_common.sh@926 -- # '[' -z 426187 ']' 00:06:56.915 16:59:12 -- common/autotest_common.sh@930 -- # kill -0 426187 00:06:56.916 16:59:12 -- common/autotest_common.sh@931 -- # uname 00:06:56.916 16:59:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:56.916 16:59:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 426187 00:06:56.916 16:59:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:56.916 16:59:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:56.916 16:59:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 426187' 00:06:56.916 killing process with pid 426187 00:06:56.916 16:59:12 -- common/autotest_common.sh@945 -- # kill 426187 00:06:56.916 16:59:12 -- common/autotest_common.sh@950 -- # wait 426187 00:06:57.183 16:59:13 -- accel/accel.sh@68 -- # trap - ERR 00:06:57.183 16:59:13 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:57.183 16:59:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:57.183 16:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.183 16:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.183 16:59:13 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:57.183 16:59:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:57.183 16:59:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.183 16:59:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.183 16:59:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.183 16:59:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.183 16:59:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.183 16:59:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.183 16:59:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.183 16:59:13 -- accel/accel.sh@42 -- # jq -r . 
00:06:57.183 16:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.183 16:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.183 16:59:13 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:57.183 16:59:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:57.183 16:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.183 16:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.183 ************************************ 00:06:57.183 START TEST accel_missing_filename 00:06:57.183 ************************************ 00:06:57.183 16:59:13 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:57.183 16:59:13 -- common/autotest_common.sh@640 -- # local es=0 00:06:57.183 16:59:13 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:57.183 16:59:13 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:57.183 16:59:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.183 16:59:13 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:57.183 16:59:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.183 16:59:13 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:57.183 16:59:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:57.183 16:59:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.183 16:59:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.183 16:59:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.183 16:59:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.183 16:59:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.183 16:59:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.183 16:59:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.183 16:59:13 -- accel/accel.sh@42 -- # jq -r . 00:06:57.183 [2024-07-20 16:59:13.339343] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:57.183 [2024-07-20 16:59:13.339419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426368 ] 00:06:57.457 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.457 [2024-07-20 16:59:13.400978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.457 [2024-07-20 16:59:13.490343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.457 [2024-07-20 16:59:13.548762] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.714 [2024-07-20 16:59:13.631042] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:57.714 A filename is required. 
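The accel negative tests that begin here all follow one shape: run accel_perf with a single invalid argument through the NOT wrapper (which asserts a non-zero exit) and check the expected diagnostic — a compress workload with no -l input file fails with "A filename is required." above, compress combined with -y is rejected in the next test, and an unknown -w workload prints the usage text. A sketch of the failing invocations, using the binary and input-file paths from this job:

    ACCEL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    $ACCEL -t 1 -w compress               # non-zero exit: "A filename is required."
    $ACCEL -t 1 -w compress -l "$BIB" -y  # non-zero exit: compress does not support verify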
00:06:57.714 16:59:13 -- common/autotest_common.sh@643 -- # es=234 00:06:57.714 16:59:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:57.714 16:59:13 -- common/autotest_common.sh@652 -- # es=106 00:06:57.714 16:59:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:57.714 16:59:13 -- common/autotest_common.sh@660 -- # es=1 00:06:57.714 16:59:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:57.714 00:06:57.714 real 0m0.390s 00:06:57.714 user 0m0.283s 00:06:57.714 sys 0m0.138s 00:06:57.714 16:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.714 16:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.714 ************************************ 00:06:57.714 END TEST accel_missing_filename 00:06:57.714 ************************************ 00:06:57.714 16:59:13 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.714 16:59:13 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:57.714 16:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.714 16:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:57.714 ************************************ 00:06:57.714 START TEST accel_compress_verify 00:06:57.714 ************************************ 00:06:57.714 16:59:13 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.714 16:59:13 -- common/autotest_common.sh@640 -- # local es=0 00:06:57.714 16:59:13 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.714 16:59:13 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:57.714 16:59:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.714 16:59:13 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:57.714 16:59:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.714 16:59:13 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.714 16:59:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:57.714 16:59:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.714 16:59:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.714 16:59:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.714 16:59:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.714 16:59:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.714 16:59:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.714 16:59:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.714 16:59:13 -- accel/accel.sh@42 -- # jq -r . 00:06:57.714 [2024-07-20 16:59:13.754284] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:57.714 [2024-07-20 16:59:13.754370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426511 ] 00:06:57.714 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.714 [2024-07-20 16:59:13.814972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.971 [2024-07-20 16:59:13.905736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.971 [2024-07-20 16:59:13.967018] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.971 [2024-07-20 16:59:14.055550] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:58.229 00:06:58.229 Compression does not support the verify option, aborting. 00:06:58.229 16:59:14 -- common/autotest_common.sh@643 -- # es=161 00:06:58.229 16:59:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.229 16:59:14 -- common/autotest_common.sh@652 -- # es=33 00:06:58.229 16:59:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:58.229 16:59:14 -- common/autotest_common.sh@660 -- # es=1 00:06:58.229 16:59:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.229 00:06:58.229 real 0m0.402s 00:06:58.229 user 0m0.295s 00:06:58.229 sys 0m0.141s 00:06:58.229 16:59:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.229 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.229 ************************************ 00:06:58.229 END TEST accel_compress_verify 00:06:58.229 ************************************ 00:06:58.229 16:59:14 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:58.229 16:59:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:58.229 16:59:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.229 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.229 ************************************ 00:06:58.229 START TEST accel_wrong_workload 00:06:58.229 ************************************ 00:06:58.229 16:59:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:58.229 16:59:14 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.229 16:59:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:58.229 16:59:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:58.229 16:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.229 16:59:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:58.229 16:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.229 16:59:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:58.229 16:59:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:58.230 16:59:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.230 16:59:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.230 16:59:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.230 16:59:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.230 16:59:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.230 16:59:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.230 16:59:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.230 16:59:14 -- accel/accel.sh@42 -- # jq -r . 
00:06:58.230 Unsupported workload type: foobar 00:06:58.230 [2024-07-20 16:59:14.185480] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:58.230 accel_perf options: 00:06:58.230 [-h help message] 00:06:58.230 [-q queue depth per core] 00:06:58.230 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.230 [-T number of threads per core 00:06:58.230 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.230 [-t time in seconds] 00:06:58.230 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.230 [ dif_verify, , dif_generate, dif_generate_copy 00:06:58.230 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.230 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.230 [-S for crc32c workload, use this seed value (default 0) 00:06:58.230 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.230 [-f for fill workload, use this BYTE value (default 255) 00:06:58.230 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.230 [-y verify result if this switch is on] 00:06:58.230 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.230 Can be used to spread operations across a wider range of memory. 00:06:58.230 16:59:14 -- common/autotest_common.sh@643 -- # es=1 00:06:58.230 16:59:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.230 16:59:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:58.230 16:59:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.230 00:06:58.230 real 0m0.025s 00:06:58.230 user 0m0.012s 00:06:58.230 sys 0m0.012s 00:06:58.230 16:59:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.230 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 ************************************ 00:06:58.230 END TEST accel_wrong_workload 00:06:58.230 ************************************ 00:06:58.230 Error: writing output failed: Broken pipe 00:06:58.230 16:59:14 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.230 16:59:14 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:58.230 16:59:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.230 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 ************************************ 00:06:58.230 START TEST accel_negative_buffers 00:06:58.230 ************************************ 00:06:58.230 16:59:14 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.230 16:59:14 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.230 16:59:14 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:58.230 16:59:14 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:58.230 16:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.230 16:59:14 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:58.230 16:59:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.230 16:59:14 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:58.230 16:59:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:58.230 16:59:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.230 16:59:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.230 16:59:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.230 16:59:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.230 16:59:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.230 16:59:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.230 16:59:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.230 16:59:14 -- accel/accel.sh@42 -- # jq -r . 00:06:58.230 -x option must be non-negative. 00:06:58.230 [2024-07-20 16:59:14.229440] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:58.230 accel_perf options: 00:06:58.230 [-h help message] 00:06:58.230 [-q queue depth per core] 00:06:58.230 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.230 [-T number of threads per core 00:06:58.230 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.230 [-t time in seconds] 00:06:58.230 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.230 [ dif_verify, , dif_generate, dif_generate_copy 00:06:58.230 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.230 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.230 [-S for crc32c workload, use this seed value (default 0) 00:06:58.230 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.230 [-f for fill workload, use this BYTE value (default 255) 00:06:58.230 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.230 [-y verify result if this switch is on] 00:06:58.230 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.230 Can be used to spread operations across a wider range of memory. 
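The usage text above is printed verbatim whenever accel_perf rejects an option, and it doubles as a reference for the flags the harness itself passes. Sticking strictly to the flags listed there, a well-formed run of the software path would look like this (values illustrative, binary path as used throughout this job):

    # 1 second of verified CRC-32C with seed 32, queue depth 32, 4 KiB transfers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -q 32 -o 4096 -y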
00:06:58.230 16:59:14 -- common/autotest_common.sh@643 -- # es=1 00:06:58.230 16:59:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.230 16:59:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:58.230 16:59:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.230 00:06:58.230 real 0m0.023s 00:06:58.230 user 0m0.017s 00:06:58.230 sys 0m0.006s 00:06:58.230 16:59:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.230 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 ************************************ 00:06:58.230 END TEST accel_negative_buffers 00:06:58.230 ************************************ 00:06:58.230 Error: writing output failed: Broken pipe 00:06:58.230 16:59:14 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:58.230 16:59:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:58.230 16:59:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.230 16:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 ************************************ 00:06:58.230 START TEST accel_crc32c 00:06:58.230 ************************************ 00:06:58.230 16:59:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:58.230 16:59:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.230 16:59:14 -- accel/accel.sh@17 -- # local accel_module 00:06:58.230 16:59:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:58.230 16:59:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:58.230 16:59:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.230 16:59:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.230 16:59:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.230 16:59:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.230 16:59:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.230 16:59:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.230 16:59:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.230 16:59:14 -- accel/accel.sh@42 -- # jq -r . 00:06:58.230 [2024-07-20 16:59:14.275073] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:58.230 [2024-07-20 16:59:14.275137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426570 ] 00:06:58.230 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.230 [2024-07-20 16:59:14.338841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.488 [2024-07-20 16:59:14.427131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.858 16:59:15 -- accel/accel.sh@18 -- # out=' 00:06:59.858 SPDK Configuration: 00:06:59.858 Core mask: 0x1 00:06:59.858 00:06:59.858 Accel Perf Configuration: 00:06:59.858 Workload Type: crc32c 00:06:59.858 CRC-32C seed: 32 00:06:59.858 Transfer size: 4096 bytes 00:06:59.858 Vector count 1 00:06:59.858 Module: software 00:06:59.858 Queue depth: 32 00:06:59.858 Allocate depth: 32 00:06:59.858 # threads/core: 1 00:06:59.858 Run time: 1 seconds 00:06:59.858 Verify: Yes 00:06:59.858 00:06:59.858 Running for 1 seconds... 
00:06:59.858 00:06:59.858 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.858 ------------------------------------------------------------------------------------ 00:06:59.858 0,0 405504/s 1584 MiB/s 0 0 00:06:59.858 ==================================================================================== 00:06:59.858 Total 405504/s 1584 MiB/s 0 0' 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.858 16:59:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.858 16:59:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:59.858 16:59:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.858 16:59:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.858 16:59:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.858 16:59:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.858 16:59:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.858 16:59:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.858 16:59:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.858 16:59:15 -- accel/accel.sh@42 -- # jq -r . 00:06:59.858 [2024-07-20 16:59:15.668673] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:59.858 [2024-07-20 16:59:15.668750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426715 ] 00:06:59.858 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.858 [2024-07-20 16:59:15.729629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.858 [2024-07-20 16:59:15.824501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.858 16:59:15 -- accel/accel.sh@21 -- # val= 00:06:59.858 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.858 16:59:15 -- accel/accel.sh@21 -- # val= 00:06:59.858 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.858 16:59:15 -- accel/accel.sh@21 -- # val=0x1 00:06:59.858 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.858 16:59:15 -- accel/accel.sh@21 -- # val= 00:06:59.858 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.858 16:59:15 -- accel/accel.sh@21 -- # val= 00:06:59.858 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.858 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.858 16:59:15 -- accel/accel.sh@21 -- # val=crc32c 00:06:59.858 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.858 16:59:15 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val=32 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 
16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val= 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val=software 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val=32 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val=32 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val=1 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val=Yes 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val= 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:06:59.859 16:59:15 -- accel/accel.sh@21 -- # val= 00:06:59.859 16:59:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # IFS=: 00:06:59.859 16:59:15 -- accel/accel.sh@20 -- # read -r var val 00:07:01.230 16:59:17 -- accel/accel.sh@21 -- # val= 00:07:01.230 16:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:01.230 16:59:17 -- accel/accel.sh@21 -- # val= 00:07:01.230 16:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:01.230 16:59:17 -- accel/accel.sh@21 -- # val= 00:07:01.230 16:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:01.230 16:59:17 -- accel/accel.sh@21 -- # val= 00:07:01.230 16:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:01.230 16:59:17 -- accel/accel.sh@21 -- # val= 00:07:01.230 16:59:17 -- accel/accel.sh@22 -- # case "$var" in 
00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:01.230 16:59:17 -- accel/accel.sh@21 -- # val= 00:07:01.230 16:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:01.230 16:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:01.230 16:59:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.231 16:59:17 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:01.231 16:59:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.231 00:07:01.231 real 0m2.787s 00:07:01.231 user 0m2.492s 00:07:01.231 sys 0m0.288s 00:07:01.231 16:59:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.231 16:59:17 -- common/autotest_common.sh@10 -- # set +x 00:07:01.231 ************************************ 00:07:01.231 END TEST accel_crc32c 00:07:01.231 ************************************ 00:07:01.231 16:59:17 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:01.231 16:59:17 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:01.231 16:59:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.231 16:59:17 -- common/autotest_common.sh@10 -- # set +x 00:07:01.231 ************************************ 00:07:01.231 START TEST accel_crc32c_C2 00:07:01.231 ************************************ 00:07:01.231 16:59:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:01.231 16:59:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.231 16:59:17 -- accel/accel.sh@17 -- # local accel_module 00:07:01.231 16:59:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:01.231 16:59:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:01.231 16:59:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.231 16:59:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.231 16:59:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.231 16:59:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.231 16:59:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.231 16:59:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.231 16:59:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.231 16:59:17 -- accel/accel.sh@42 -- # jq -r . 00:07:01.231 [2024-07-20 16:59:17.087106] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:01.231 [2024-07-20 16:59:17.087214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid426991 ] 00:07:01.231 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.231 [2024-07-20 16:59:17.150246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.231 [2024-07-20 16:59:17.238538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.604 16:59:18 -- accel/accel.sh@18 -- # out=' 00:07:02.604 SPDK Configuration: 00:07:02.604 Core mask: 0x1 00:07:02.604 00:07:02.604 Accel Perf Configuration: 00:07:02.604 Workload Type: crc32c 00:07:02.604 CRC-32C seed: 0 00:07:02.604 Transfer size: 4096 bytes 00:07:02.604 Vector count 2 00:07:02.604 Module: software 00:07:02.604 Queue depth: 32 00:07:02.604 Allocate depth: 32 00:07:02.604 # threads/core: 1 00:07:02.604 Run time: 1 seconds 00:07:02.604 Verify: Yes 00:07:02.604 00:07:02.604 Running for 1 seconds... 00:07:02.604 00:07:02.604 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.604 ------------------------------------------------------------------------------------ 00:07:02.604 0,0 315808/s 1233 MiB/s 0 0 00:07:02.604 ==================================================================================== 00:07:02.604 Total 315808/s 1233 MiB/s 0 0' 00:07:02.604 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.604 16:59:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:02.604 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.604 16:59:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:02.604 16:59:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.604 16:59:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.604 16:59:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.604 16:59:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.604 16:59:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.605 16:59:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.605 16:59:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.605 16:59:18 -- accel/accel.sh@42 -- # jq -r . 00:07:02.605 [2024-07-20 16:59:18.485525] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:02.605 [2024-07-20 16:59:18.485604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427138 ] 00:07:02.605 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.605 [2024-07-20 16:59:18.546706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.605 [2024-07-20 16:59:18.636415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val= 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val= 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val=0x1 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val= 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val= 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val=crc32c 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val=0 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val= 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val=software 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val=32 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val=32 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- 
accel/accel.sh@21 -- # val=1 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val=Yes 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val= 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:02.605 16:59:18 -- accel/accel.sh@21 -- # val= 00:07:02.605 16:59:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # IFS=: 00:07:02.605 16:59:18 -- accel/accel.sh@20 -- # read -r var val 00:07:03.980 16:59:19 -- accel/accel.sh@21 -- # val= 00:07:03.980 16:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # IFS=: 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # read -r var val 00:07:03.980 16:59:19 -- accel/accel.sh@21 -- # val= 00:07:03.980 16:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # IFS=: 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # read -r var val 00:07:03.980 16:59:19 -- accel/accel.sh@21 -- # val= 00:07:03.980 16:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # IFS=: 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # read -r var val 00:07:03.980 16:59:19 -- accel/accel.sh@21 -- # val= 00:07:03.980 16:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # IFS=: 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # read -r var val 00:07:03.980 16:59:19 -- accel/accel.sh@21 -- # val= 00:07:03.980 16:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # IFS=: 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # read -r var val 00:07:03.980 16:59:19 -- accel/accel.sh@21 -- # val= 00:07:03.980 16:59:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # IFS=: 00:07:03.980 16:59:19 -- accel/accel.sh@20 -- # read -r var val 00:07:03.980 16:59:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.980 16:59:19 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:03.980 16:59:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.980 00:07:03.980 real 0m2.805s 00:07:03.980 user 0m2.508s 00:07:03.980 sys 0m0.290s 00:07:03.980 16:59:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.980 16:59:19 -- common/autotest_common.sh@10 -- # set +x 00:07:03.980 ************************************ 00:07:03.980 END TEST accel_crc32c_C2 00:07:03.980 ************************************ 00:07:03.980 16:59:19 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:03.980 16:59:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:03.980 16:59:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.980 16:59:19 -- common/autotest_common.sh@10 -- # set +x 00:07:03.980 ************************************ 00:07:03.980 START TEST accel_copy 
00:07:03.980 ************************************ 00:07:03.980 16:59:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:03.980 16:59:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.980 16:59:19 -- accel/accel.sh@17 -- # local accel_module 00:07:03.980 16:59:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:03.980 16:59:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:03.980 16:59:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.980 16:59:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.980 16:59:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.980 16:59:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.980 16:59:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.980 16:59:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.980 16:59:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.980 16:59:19 -- accel/accel.sh@42 -- # jq -r . 00:07:03.980 [2024-07-20 16:59:19.920711] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:03.980 [2024-07-20 16:59:19.920790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427291 ] 00:07:03.980 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.980 [2024-07-20 16:59:19.985389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.980 [2024-07-20 16:59:20.084150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.351 16:59:21 -- accel/accel.sh@18 -- # out=' 00:07:05.351 SPDK Configuration: 00:07:05.351 Core mask: 0x1 00:07:05.351 00:07:05.351 Accel Perf Configuration: 00:07:05.351 Workload Type: copy 00:07:05.351 Transfer size: 4096 bytes 00:07:05.351 Vector count 1 00:07:05.351 Module: software 00:07:05.351 Queue depth: 32 00:07:05.351 Allocate depth: 32 00:07:05.351 # threads/core: 1 00:07:05.351 Run time: 1 seconds 00:07:05.351 Verify: Yes 00:07:05.351 00:07:05.351 Running for 1 seconds... 00:07:05.351 00:07:05.351 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.351 ------------------------------------------------------------------------------------ 00:07:05.351 0,0 277696/s 1084 MiB/s 0 0 00:07:05.351 ==================================================================================== 00:07:05.351 Total 277696/s 1084 MiB/s 0 0' 00:07:05.351 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.351 16:59:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:05.351 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.351 16:59:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:05.351 16:59:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.351 16:59:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.351 16:59:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.351 16:59:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.351 16:59:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.351 16:59:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.351 16:59:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.351 16:59:21 -- accel/accel.sh@42 -- # jq -r . 00:07:05.351 [2024-07-20 16:59:21.322481] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:05.351 [2024-07-20 16:59:21.322560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427436 ] 00:07:05.351 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.351 [2024-07-20 16:59:21.384162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.351 [2024-07-20 16:59:21.472621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val= 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val= 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val=0x1 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val= 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val= 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val=copy 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val= 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val=software 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val=32 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val=32 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- accel/accel.sh@21 -- # val=1 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.609 16:59:21 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:05.609 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.609 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.610 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.610 16:59:21 -- accel/accel.sh@21 -- # val=Yes 00:07:05.610 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.610 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.610 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.610 16:59:21 -- accel/accel.sh@21 -- # val= 00:07:05.610 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.610 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.610 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:05.610 16:59:21 -- accel/accel.sh@21 -- # val= 00:07:05.610 16:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.610 16:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:05.610 16:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:06.543 16:59:22 -- accel/accel.sh@21 -- # val= 00:07:06.543 16:59:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.543 16:59:22 -- accel/accel.sh@20 -- # IFS=: 00:07:06.543 16:59:22 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 16:59:22 -- accel/accel.sh@21 -- # val= 00:07:06.801 16:59:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 16:59:22 -- accel/accel.sh@21 -- # val= 00:07:06.801 16:59:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 16:59:22 -- accel/accel.sh@21 -- # val= 00:07:06.801 16:59:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 16:59:22 -- accel/accel.sh@21 -- # val= 00:07:06.801 16:59:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 16:59:22 -- accel/accel.sh@21 -- # val= 00:07:06.801 16:59:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 16:59:22 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 16:59:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.801 16:59:22 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:06.801 16:59:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.801 00:07:06.801 real 0m2.803s 00:07:06.801 user 0m2.495s 00:07:06.801 sys 0m0.300s 00:07:06.801 16:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.801 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:07:06.801 ************************************ 00:07:06.801 END TEST accel_copy 00:07:06.801 ************************************ 00:07:06.801 16:59:22 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.801 16:59:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:06.801 16:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.801 16:59:22 -- common/autotest_common.sh@10 -- # set +x 00:07:06.801 ************************************ 00:07:06.801 START TEST accel_fill 00:07:06.801 ************************************ 00:07:06.801 16:59:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.801 16:59:22 -- accel/accel.sh@16 -- # local accel_opc 
00:07:06.801 16:59:22 -- accel/accel.sh@17 -- # local accel_module 00:07:06.801 16:59:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.801 16:59:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.801 16:59:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.801 16:59:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.801 16:59:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.801 16:59:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.801 16:59:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.801 16:59:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.801 16:59:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.801 16:59:22 -- accel/accel.sh@42 -- # jq -r . 00:07:06.801 [2024-07-20 16:59:22.745148] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:06.801 [2024-07-20 16:59:22.745230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427717 ] 00:07:06.801 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.801 [2024-07-20 16:59:22.807854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.801 [2024-07-20 16:59:22.896491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.199 16:59:24 -- accel/accel.sh@18 -- # out=' 00:07:08.199 SPDK Configuration: 00:07:08.199 Core mask: 0x1 00:07:08.199 00:07:08.199 Accel Perf Configuration: 00:07:08.199 Workload Type: fill 00:07:08.199 Fill pattern: 0x80 00:07:08.199 Transfer size: 4096 bytes 00:07:08.199 Vector count 1 00:07:08.199 Module: software 00:07:08.199 Queue depth: 64 00:07:08.199 Allocate depth: 64 00:07:08.199 # threads/core: 1 00:07:08.199 Run time: 1 seconds 00:07:08.199 Verify: Yes 00:07:08.199 00:07:08.199 Running for 1 seconds... 00:07:08.199 00:07:08.199 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.199 ------------------------------------------------------------------------------------ 00:07:08.199 0,0 411072/s 1605 MiB/s 0 0 00:07:08.199 ==================================================================================== 00:07:08.199 Total 411072/s 1605 MiB/s 0 0' 00:07:08.199 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.199 16:59:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.199 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.199 16:59:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:08.199 16:59:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.199 16:59:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.199 16:59:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.199 16:59:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.199 16:59:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.199 16:59:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.199 16:59:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.199 16:59:24 -- accel/accel.sh@42 -- # jq -r . 00:07:08.199 [2024-07-20 16:59:24.148328] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:08.199 [2024-07-20 16:59:24.148419] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427860 ] 00:07:08.199 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.199 [2024-07-20 16:59:24.211428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.199 [2024-07-20 16:59:24.301256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val= 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val= 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val=0x1 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val= 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val= 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val=fill 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val=0x80 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val= 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val=software 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val=64 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val=64 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- 
accel/accel.sh@21 -- # val=1 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val=Yes 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val= 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:08.464 16:59:24 -- accel/accel.sh@21 -- # val= 00:07:08.464 16:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:08.464 16:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:09.396 16:59:25 -- accel/accel.sh@21 -- # val= 00:07:09.396 16:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.396 16:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.396 16:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.396 16:59:25 -- accel/accel.sh@21 -- # val= 00:07:09.396 16:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.396 16:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.396 16:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.396 16:59:25 -- accel/accel.sh@21 -- # val= 00:07:09.396 16:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.396 16:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.396 16:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.396 16:59:25 -- accel/accel.sh@21 -- # val= 00:07:09.396 16:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.397 16:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.397 16:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.397 16:59:25 -- accel/accel.sh@21 -- # val= 00:07:09.397 16:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.397 16:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.397 16:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.397 16:59:25 -- accel/accel.sh@21 -- # val= 00:07:09.397 16:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.397 16:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:09.397 16:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:09.397 16:59:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.397 16:59:25 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:09.397 16:59:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.397 00:07:09.397 real 0m2.808s 00:07:09.397 user 0m2.519s 00:07:09.397 sys 0m0.280s 00:07:09.397 16:59:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.397 16:59:25 -- common/autotest_common.sh@10 -- # set +x 00:07:09.397 ************************************ 00:07:09.397 END TEST accel_fill 00:07:09.397 ************************************ 00:07:09.654 16:59:25 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:09.654 16:59:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:09.654 16:59:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.654 16:59:25 -- common/autotest_common.sh@10 -- # set +x 00:07:09.654 ************************************ 00:07:09.654 START TEST 
accel_copy_crc32c 00:07:09.654 ************************************ 00:07:09.654 16:59:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:09.654 16:59:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.654 16:59:25 -- accel/accel.sh@17 -- # local accel_module 00:07:09.654 16:59:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:09.654 16:59:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:09.654 16:59:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.654 16:59:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.654 16:59:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.654 16:59:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.655 16:59:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.655 16:59:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.655 16:59:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.655 16:59:25 -- accel/accel.sh@42 -- # jq -r . 00:07:09.655 [2024-07-20 16:59:25.580450] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:09.655 [2024-07-20 16:59:25.580528] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428020 ] 00:07:09.655 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.655 [2024-07-20 16:59:25.644319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.655 [2024-07-20 16:59:25.731910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.025 16:59:26 -- accel/accel.sh@18 -- # out=' 00:07:11.025 SPDK Configuration: 00:07:11.025 Core mask: 0x1 00:07:11.025 00:07:11.025 Accel Perf Configuration: 00:07:11.025 Workload Type: copy_crc32c 00:07:11.025 CRC-32C seed: 0 00:07:11.025 Vector size: 4096 bytes 00:07:11.025 Transfer size: 4096 bytes 00:07:11.025 Vector count 1 00:07:11.025 Module: software 00:07:11.025 Queue depth: 32 00:07:11.025 Allocate depth: 32 00:07:11.025 # threads/core: 1 00:07:11.025 Run time: 1 seconds 00:07:11.025 Verify: Yes 00:07:11.025 00:07:11.025 Running for 1 seconds... 00:07:11.025 00:07:11.025 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.025 ------------------------------------------------------------------------------------ 00:07:11.025 0,0 218336/s 852 MiB/s 0 0 00:07:11.025 ==================================================================================== 00:07:11.025 Total 218336/s 852 MiB/s 0 0' 00:07:11.025 16:59:26 -- accel/accel.sh@20 -- # IFS=: 00:07:11.025 16:59:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:11.025 16:59:26 -- accel/accel.sh@20 -- # read -r var val 00:07:11.025 16:59:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:11.025 16:59:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.025 16:59:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.025 16:59:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.025 16:59:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.025 16:59:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.025 16:59:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.025 16:59:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.025 16:59:26 -- accel/accel.sh@42 -- # jq -r . 
00:07:11.025 [2024-07-20 16:59:26.973601] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:11.025 [2024-07-20 16:59:26.973694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428165 ] 00:07:11.025 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.025 [2024-07-20 16:59:27.034874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.025 [2024-07-20 16:59:27.124240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val= 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val= 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val=0x1 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val= 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val= 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val=0 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val= 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val=software 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val=32 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 
00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val=32 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val=1 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val=Yes 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val= 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:11.283 16:59:27 -- accel/accel.sh@21 -- # val= 00:07:11.283 16:59:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:11.283 16:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:12.218 16:59:28 -- accel/accel.sh@21 -- # val= 00:07:12.218 16:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.218 16:59:28 -- accel/accel.sh@21 -- # val= 00:07:12.218 16:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.218 16:59:28 -- accel/accel.sh@21 -- # val= 00:07:12.218 16:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.218 16:59:28 -- accel/accel.sh@21 -- # val= 00:07:12.218 16:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.218 16:59:28 -- accel/accel.sh@21 -- # val= 00:07:12.218 16:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.218 16:59:28 -- accel/accel.sh@21 -- # val= 00:07:12.218 16:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:12.218 16:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:12.218 16:59:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.218 16:59:28 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:12.218 16:59:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.218 00:07:12.218 real 0m2.795s 00:07:12.218 user 0m2.506s 00:07:12.218 sys 0m0.281s 00:07:12.218 16:59:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.218 16:59:28 -- common/autotest_common.sh@10 -- # set +x 00:07:12.218 ************************************ 00:07:12.218 END TEST accel_copy_crc32c 00:07:12.218 ************************************ 00:07:12.477 
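The Bandwidth column in each of these tables is simply the Transfers column multiplied by the transfer size from the configuration block, rounded down to whole MiB/s, which makes the results easy to sanity-check; for the copy_crc32c run above:

    $ echo $(( 218336 * 4096 / 1048576 ))   # transfers/s * 4096 B each, in MiB/s
    852

The -C 2 variant that follows reports a combined transfer size of 8192 bytes (two 4096-byte vectors), so the same arithmetic applies there with 8192 in place of 4096.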
16:59:28 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:12.477 16:59:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:12.477 16:59:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.477 16:59:28 -- common/autotest_common.sh@10 -- # set +x 00:07:12.477 ************************************ 00:07:12.477 START TEST accel_copy_crc32c_C2 00:07:12.477 ************************************ 00:07:12.477 16:59:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:12.477 16:59:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.477 16:59:28 -- accel/accel.sh@17 -- # local accel_module 00:07:12.477 16:59:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:12.477 16:59:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:12.477 16:59:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.477 16:59:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.477 16:59:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.477 16:59:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.477 16:59:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.477 16:59:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.477 16:59:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.477 16:59:28 -- accel/accel.sh@42 -- # jq -r . 00:07:12.477 [2024-07-20 16:59:28.403969] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:12.477 [2024-07-20 16:59:28.404048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428437 ] 00:07:12.477 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.477 [2024-07-20 16:59:28.467315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.477 [2024-07-20 16:59:28.555177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.852 16:59:29 -- accel/accel.sh@18 -- # out=' 00:07:13.852 SPDK Configuration: 00:07:13.852 Core mask: 0x1 00:07:13.852 00:07:13.852 Accel Perf Configuration: 00:07:13.852 Workload Type: copy_crc32c 00:07:13.852 CRC-32C seed: 0 00:07:13.852 Vector size: 4096 bytes 00:07:13.852 Transfer size: 8192 bytes 00:07:13.852 Vector count 2 00:07:13.852 Module: software 00:07:13.852 Queue depth: 32 00:07:13.852 Allocate depth: 32 00:07:13.852 # threads/core: 1 00:07:13.852 Run time: 1 seconds 00:07:13.852 Verify: Yes 00:07:13.852 00:07:13.852 Running for 1 seconds... 
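The configuration block just printed shows the effect of -C 2: two 4096-byte vectors are chained into one 8192-byte transfer per operation (Vector size 4096, Transfer size 8192, Vector count 2). The throughput rows printed next can be sanity-checked as transfers/s times transfer size; a quick check in plain bash arithmetic, using only the numbers from the table:

    echo $(( 155040 * 8192 / 1048576 ))   # prints 1211 -> MiB/s, matches the per-core row

The Total row appears to account bandwidth at the 4096-byte vector size instead, which is why it reads 605 MiB/s for the same 155040 transfers/s.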
00:07:13.852 00:07:13.852 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.852 ------------------------------------------------------------------------------------ 00:07:13.852 0,0 155040/s 1211 MiB/s 0 0 00:07:13.852 ==================================================================================== 00:07:13.852 Total 155040/s 605 MiB/s 0 0' 00:07:13.852 16:59:29 -- accel/accel.sh@20 -- # IFS=: 00:07:13.852 16:59:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:13.852 16:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:13.852 16:59:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:13.852 16:59:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.852 16:59:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.852 16:59:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.852 16:59:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.852 16:59:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.852 16:59:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.852 16:59:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.852 16:59:29 -- accel/accel.sh@42 -- # jq -r . 00:07:13.853 [2024-07-20 16:59:29.798761] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:13.853 [2024-07-20 16:59:29.798873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428588 ] 00:07:13.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.853 [2024-07-20 16:59:29.860589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.853 [2024-07-20 16:59:29.950315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val= 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val= 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val=0x1 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val= 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val= 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val=0 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 
00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val= 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val=software 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val=32 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val=32 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val=1 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.111 16:59:30 -- accel/accel.sh@21 -- # val=Yes 00:07:14.111 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.111 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.112 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.112 16:59:30 -- accel/accel.sh@21 -- # val= 00:07:14.112 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.112 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.112 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:14.112 16:59:30 -- accel/accel.sh@21 -- # val= 00:07:14.112 16:59:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.112 16:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:14.112 16:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:15.046 16:59:31 -- accel/accel.sh@21 -- # val= 00:07:15.046 16:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.046 16:59:31 -- accel/accel.sh@21 -- # val= 00:07:15.046 16:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.046 16:59:31 -- accel/accel.sh@21 -- # val= 00:07:15.046 16:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.046 16:59:31 -- accel/accel.sh@21 -- # val= 00:07:15.046 16:59:31 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.046 16:59:31 -- accel/accel.sh@21 -- # val= 00:07:15.046 16:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.046 16:59:31 -- accel/accel.sh@21 -- # val= 00:07:15.046 16:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:15.046 16:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:15.046 16:59:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.046 16:59:31 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:15.046 16:59:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.046 00:07:15.046 real 0m2.804s 00:07:15.046 user 0m2.509s 00:07:15.046 sys 0m0.289s 00:07:15.046 16:59:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.046 16:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:15.046 ************************************ 00:07:15.046 END TEST accel_copy_crc32c_C2 00:07:15.046 ************************************ 00:07:15.305 16:59:31 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:15.305 16:59:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:15.305 16:59:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.305 16:59:31 -- common/autotest_common.sh@10 -- # set +x 00:07:15.305 ************************************ 00:07:15.305 START TEST accel_dualcast 00:07:15.305 ************************************ 00:07:15.305 16:59:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:15.305 16:59:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.305 16:59:31 -- accel/accel.sh@17 -- # local accel_module 00:07:15.305 16:59:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:15.305 16:59:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:15.305 16:59:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.305 16:59:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.305 16:59:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.305 16:59:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.305 16:59:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.305 16:59:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.305 16:59:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.305 16:59:31 -- accel/accel.sh@42 -- # jq -r . 00:07:15.305 [2024-07-20 16:59:31.236939] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:15.305 [2024-07-20 16:59:31.237021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428741 ] 00:07:15.305 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.305 [2024-07-20 16:59:31.300486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.305 [2024-07-20 16:59:31.388677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.678 16:59:32 -- accel/accel.sh@18 -- # out=' 00:07:16.678 SPDK Configuration: 00:07:16.678 Core mask: 0x1 00:07:16.678 00:07:16.678 Accel Perf Configuration: 00:07:16.678 Workload Type: dualcast 00:07:16.678 Transfer size: 4096 bytes 00:07:16.678 Vector count 1 00:07:16.678 Module: software 00:07:16.678 Queue depth: 32 00:07:16.678 Allocate depth: 32 00:07:16.678 # threads/core: 1 00:07:16.678 Run time: 1 seconds 00:07:16.678 Verify: Yes 00:07:16.678 00:07:16.678 Running for 1 seconds... 00:07:16.678 00:07:16.678 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.678 ------------------------------------------------------------------------------------ 00:07:16.678 0,0 297440/s 1161 MiB/s 0 0 00:07:16.678 ==================================================================================== 00:07:16.678 Total 297440/s 1161 MiB/s 0 0' 00:07:16.678 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.678 16:59:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:16.678 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.678 16:59:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:16.678 16:59:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.678 16:59:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.678 16:59:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.678 16:59:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.678 16:59:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.678 16:59:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.678 16:59:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.678 16:59:32 -- accel/accel.sh@42 -- # jq -r . 00:07:16.678 [2024-07-20 16:59:32.627061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
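For context, the dualcast workload above copies one source buffer into two destination buffers per operation; the invocation is the same accel_perf binary with -t 1 -w dualcast -y, as shown in the trace. Its first-run figures are self-consistent at the 4096-byte transfer size, checked with the same bash arithmetic as before:

    echo $(( 297440 * 4096 / 1048576 ))   # prints 1161 -> MiB/s, matches both table rows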
00:07:16.678 [2024-07-20 16:59:32.627157] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428891 ] 00:07:16.678 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.678 [2024-07-20 16:59:32.689033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.678 [2024-07-20 16:59:32.777306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.678 16:59:32 -- accel/accel.sh@21 -- # val= 00:07:16.678 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.678 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.678 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.678 16:59:32 -- accel/accel.sh@21 -- # val= 00:07:16.678 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.678 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.678 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.678 16:59:32 -- accel/accel.sh@21 -- # val=0x1 00:07:16.678 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.678 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.678 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val= 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val= 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val=dualcast 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val= 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val=software 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val=32 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val=32 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val=1 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val=Yes 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val= 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:16.935 16:59:32 -- accel/accel.sh@21 -- # val= 00:07:16.935 16:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:16.935 16:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:17.866 16:59:33 -- accel/accel.sh@21 -- # val= 00:07:17.866 16:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:17.866 16:59:33 -- accel/accel.sh@21 -- # val= 00:07:17.866 16:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:17.866 16:59:33 -- accel/accel.sh@21 -- # val= 00:07:17.866 16:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:17.866 16:59:33 -- accel/accel.sh@21 -- # val= 00:07:17.866 16:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:17.866 16:59:33 -- accel/accel.sh@21 -- # val= 00:07:17.866 16:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:17.866 16:59:33 -- accel/accel.sh@21 -- # val= 00:07:17.866 16:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:17.866 16:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:17.866 16:59:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.866 16:59:33 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:17.866 16:59:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.866 00:07:17.866 real 0m2.778s 00:07:17.866 user 0m2.500s 00:07:17.866 sys 0m0.270s 00:07:17.866 16:59:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.866 16:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:17.866 ************************************ 00:07:17.866 END TEST accel_dualcast 00:07:17.866 ************************************ 00:07:17.866 16:59:34 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:17.866 16:59:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:17.866 16:59:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.866 16:59:34 -- common/autotest_common.sh@10 -- # set +x 00:07:17.866 ************************************ 00:07:17.866 START TEST accel_compare 00:07:17.866 ************************************ 00:07:17.866 16:59:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:17.866 16:59:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.866 16:59:34 -- 
accel/accel.sh@17 -- # local accel_module 00:07:17.866 16:59:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:17.866 16:59:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:17.866 16:59:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.866 16:59:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.866 16:59:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.866 16:59:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.866 16:59:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.866 16:59:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.866 16:59:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.866 16:59:34 -- accel/accel.sh@42 -- # jq -r . 00:07:18.123 [2024-07-20 16:59:34.037922] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:18.123 [2024-07-20 16:59:34.037996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429164 ] 00:07:18.123 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.123 [2024-07-20 16:59:34.102145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.123 [2024-07-20 16:59:34.192187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.493 16:59:35 -- accel/accel.sh@18 -- # out=' 00:07:19.493 SPDK Configuration: 00:07:19.493 Core mask: 0x1 00:07:19.493 00:07:19.493 Accel Perf Configuration: 00:07:19.493 Workload Type: compare 00:07:19.493 Transfer size: 4096 bytes 00:07:19.493 Vector count 1 00:07:19.493 Module: software 00:07:19.493 Queue depth: 32 00:07:19.493 Allocate depth: 32 00:07:19.493 # threads/core: 1 00:07:19.493 Run time: 1 seconds 00:07:19.493 Verify: Yes 00:07:19.493 00:07:19.493 Running for 1 seconds... 00:07:19.493 00:07:19.493 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.493 ------------------------------------------------------------------------------------ 00:07:19.493 0,0 398624/s 1557 MiB/s 0 0 00:07:19.493 ==================================================================================== 00:07:19.493 Total 398624/s 1557 MiB/s 0 0' 00:07:19.493 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.493 16:59:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:19.493 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.493 16:59:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:19.493 16:59:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.493 16:59:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.493 16:59:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.493 16:59:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.493 16:59:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.493 16:59:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.493 16:59:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.493 16:59:35 -- accel/accel.sh@42 -- # jq -r . 00:07:19.493 [2024-07-20 16:59:35.439889] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
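The compare workload just started performs a memcmp-style comparison of two equal-sized buffers, so no data is copied and it posts the highest rate in this series. The first-run table above checks out the same way:

    echo $(( 398624 * 4096 / 1048576 ))   # prints 1557 -> MiB/s, matches the table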
00:07:19.493 [2024-07-20 16:59:35.439967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429305 ] 00:07:19.493 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.493 [2024-07-20 16:59:35.502700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.493 [2024-07-20 16:59:35.594075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val= 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val= 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val=0x1 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val= 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val= 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val=compare 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val= 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val=software 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val=32 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val=32 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val=1 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val=Yes 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val= 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:19.751 16:59:35 -- accel/accel.sh@21 -- # val= 00:07:19.751 16:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:19.751 16:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:20.685 16:59:36 -- accel/accel.sh@21 -- # val= 00:07:20.685 16:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:20.685 16:59:36 -- accel/accel.sh@21 -- # val= 00:07:20.685 16:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:20.685 16:59:36 -- accel/accel.sh@21 -- # val= 00:07:20.685 16:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:20.685 16:59:36 -- accel/accel.sh@21 -- # val= 00:07:20.685 16:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:20.685 16:59:36 -- accel/accel.sh@21 -- # val= 00:07:20.685 16:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:20.685 16:59:36 -- accel/accel.sh@21 -- # val= 00:07:20.685 16:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:20.685 16:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:20.685 16:59:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.685 16:59:36 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:20.685 16:59:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.685 00:07:20.685 real 0m2.809s 00:07:20.685 user 0m2.513s 00:07:20.685 sys 0m0.288s 00:07:20.685 16:59:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.685 16:59:36 -- common/autotest_common.sh@10 -- # set +x 00:07:20.685 ************************************ 00:07:20.685 END TEST accel_compare 00:07:20.685 ************************************ 00:07:20.943 16:59:36 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:20.943 16:59:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:20.943 16:59:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.943 16:59:36 -- common/autotest_common.sh@10 -- # set +x 00:07:20.943 ************************************ 00:07:20.943 START TEST accel_xor 00:07:20.943 ************************************ 00:07:20.943 16:59:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:20.943 16:59:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.943 16:59:36 -- accel/accel.sh@17 
-- # local accel_module 00:07:20.943 16:59:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:20.943 16:59:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:20.943 16:59:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.943 16:59:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.943 16:59:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.943 16:59:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.943 16:59:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.943 16:59:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.943 16:59:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.943 16:59:36 -- accel/accel.sh@42 -- # jq -r . 00:07:20.943 [2024-07-20 16:59:36.872583] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:20.943 [2024-07-20 16:59:36.872663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429469 ] 00:07:20.943 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.943 [2024-07-20 16:59:36.934968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.943 [2024-07-20 16:59:37.025546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.325 16:59:38 -- accel/accel.sh@18 -- # out=' 00:07:22.325 SPDK Configuration: 00:07:22.325 Core mask: 0x1 00:07:22.325 00:07:22.325 Accel Perf Configuration: 00:07:22.325 Workload Type: xor 00:07:22.325 Source buffers: 2 00:07:22.325 Transfer size: 4096 bytes 00:07:22.325 Vector count 1 00:07:22.325 Module: software 00:07:22.325 Queue depth: 32 00:07:22.325 Allocate depth: 32 00:07:22.325 # threads/core: 1 00:07:22.325 Run time: 1 seconds 00:07:22.325 Verify: Yes 00:07:22.325 00:07:22.325 Running for 1 seconds... 00:07:22.325 00:07:22.325 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.325 ------------------------------------------------------------------------------------ 00:07:22.325 0,0 193600/s 756 MiB/s 0 0 00:07:22.325 ==================================================================================== 00:07:22.325 Total 193600/s 756 MiB/s 0 0' 00:07:22.325 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 16:59:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:22.325 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 16:59:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:22.325 16:59:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.325 16:59:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.325 16:59:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.325 16:59:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.325 16:59:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.325 16:59:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.325 16:59:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.325 16:59:38 -- accel/accel.sh@42 -- # jq -r . 00:07:22.325 [2024-07-20 16:59:38.270460] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
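The xor workload XORs its source buffers into a destination; run with plain -t 1 -w xor -y and no -x flag, the trace shows it defaults to two source buffers (Source buffers: 2). The same one-line check holds for its table:

    echo $(( 193600 * 4096 / 1048576 ))   # prints 756 -> MiB/s, matches the table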
00:07:22.325 [2024-07-20 16:59:38.270542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429611 ] 00:07:22.325 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.325 [2024-07-20 16:59:38.331956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.325 [2024-07-20 16:59:38.421407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val= 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val= 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val=0x1 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val= 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val= 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val=xor 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val=2 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val= 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val=software 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val=32 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val=32 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- 
accel/accel.sh@21 -- # val=1 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val=Yes 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val= 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:22.610 16:59:38 -- accel/accel.sh@21 -- # val= 00:07:22.610 16:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:22.610 16:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:23.544 16:59:39 -- accel/accel.sh@21 -- # val= 00:07:23.544 16:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # IFS=: 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # read -r var val 00:07:23.544 16:59:39 -- accel/accel.sh@21 -- # val= 00:07:23.544 16:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # IFS=: 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # read -r var val 00:07:23.544 16:59:39 -- accel/accel.sh@21 -- # val= 00:07:23.544 16:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # IFS=: 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # read -r var val 00:07:23.544 16:59:39 -- accel/accel.sh@21 -- # val= 00:07:23.544 16:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # IFS=: 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # read -r var val 00:07:23.544 16:59:39 -- accel/accel.sh@21 -- # val= 00:07:23.544 16:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # IFS=: 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # read -r var val 00:07:23.544 16:59:39 -- accel/accel.sh@21 -- # val= 00:07:23.544 16:59:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # IFS=: 00:07:23.544 16:59:39 -- accel/accel.sh@20 -- # read -r var val 00:07:23.544 16:59:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.544 16:59:39 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:23.544 16:59:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.544 00:07:23.544 real 0m2.786s 00:07:23.544 user 0m2.495s 00:07:23.544 sys 0m0.282s 00:07:23.544 16:59:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.544 16:59:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.544 ************************************ 00:07:23.544 END TEST accel_xor 00:07:23.544 ************************************ 00:07:23.544 16:59:39 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:23.544 16:59:39 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:23.544 16:59:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.544 16:59:39 -- common/autotest_common.sh@10 -- # set +x 00:07:23.544 ************************************ 00:07:23.544 START TEST accel_xor 
00:07:23.544 ************************************ 00:07:23.544 16:59:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:23.544 16:59:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.544 16:59:39 -- accel/accel.sh@17 -- # local accel_module 00:07:23.544 16:59:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:23.544 16:59:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:23.544 16:59:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.544 16:59:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.544 16:59:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.544 16:59:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.544 16:59:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.544 16:59:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.544 16:59:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.544 16:59:39 -- accel/accel.sh@42 -- # jq -r . 00:07:23.544 [2024-07-20 16:59:39.682163] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:23.544 [2024-07-20 16:59:39.682251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid429893 ] 00:07:23.803 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.803 [2024-07-20 16:59:39.745495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.803 [2024-07-20 16:59:39.835655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.176 16:59:41 -- accel/accel.sh@18 -- # out=' 00:07:25.176 SPDK Configuration: 00:07:25.176 Core mask: 0x1 00:07:25.176 00:07:25.176 Accel Perf Configuration: 00:07:25.176 Workload Type: xor 00:07:25.176 Source buffers: 3 00:07:25.176 Transfer size: 4096 bytes 00:07:25.176 Vector count 1 00:07:25.176 Module: software 00:07:25.176 Queue depth: 32 00:07:25.176 Allocate depth: 32 00:07:25.176 # threads/core: 1 00:07:25.176 Run time: 1 seconds 00:07:25.176 Verify: Yes 00:07:25.176 00:07:25.176 Running for 1 seconds... 00:07:25.176 00:07:25.176 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.176 ------------------------------------------------------------------------------------ 00:07:25.176 0,0 185024/s 722 MiB/s 0 0 00:07:25.176 ==================================================================================== 00:07:25.176 Total 185024/s 722 MiB/s 0 0' 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.176 16:59:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.176 16:59:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:25.176 16:59:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.176 16:59:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.176 16:59:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.176 16:59:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.176 16:59:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.176 16:59:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.176 16:59:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.176 16:59:41 -- accel/accel.sh@42 -- # jq -r . 00:07:25.176 [2024-07-20 16:59:41.085247] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
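This second xor variant adds -x 3, raising the source-buffer count to three (Source buffers: 3 in the configuration above); the extra read stream costs a little throughput relative to the two-buffer run:

    echo $(( 185024 * 4096 / 1048576 ))   # prints 722 -> MiB/s, matches the table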
00:07:25.176 [2024-07-20 16:59:41.085332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430031 ] 00:07:25.176 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.176 [2024-07-20 16:59:41.147674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.176 [2024-07-20 16:59:41.238734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.176 16:59:41 -- accel/accel.sh@21 -- # val= 00:07:25.176 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.176 16:59:41 -- accel/accel.sh@21 -- # val= 00:07:25.176 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.176 16:59:41 -- accel/accel.sh@21 -- # val=0x1 00:07:25.176 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.176 16:59:41 -- accel/accel.sh@21 -- # val= 00:07:25.176 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.176 16:59:41 -- accel/accel.sh@21 -- # val= 00:07:25.176 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.176 16:59:41 -- accel/accel.sh@21 -- # val=xor 00:07:25.176 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.176 16:59:41 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.176 16:59:41 -- accel/accel.sh@21 -- # val=3 00:07:25.176 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.176 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.176 16:59:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.176 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.177 16:59:41 -- accel/accel.sh@21 -- # val= 00:07:25.177 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.177 16:59:41 -- accel/accel.sh@21 -- # val=software 00:07:25.177 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.177 16:59:41 -- accel/accel.sh@21 -- # val=32 00:07:25.177 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.177 16:59:41 -- accel/accel.sh@21 -- # val=32 00:07:25.177 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.177 16:59:41 -- 
accel/accel.sh@21 -- # val=1 00:07:25.177 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.177 16:59:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.177 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.177 16:59:41 -- accel/accel.sh@21 -- # val=Yes 00:07:25.177 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.177 16:59:41 -- accel/accel.sh@21 -- # val= 00:07:25.177 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:25.177 16:59:41 -- accel/accel.sh@21 -- # val= 00:07:25.177 16:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:25.177 16:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:26.549 16:59:42 -- accel/accel.sh@21 -- # val= 00:07:26.549 16:59:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.549 16:59:42 -- accel/accel.sh@21 -- # val= 00:07:26.549 16:59:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.549 16:59:42 -- accel/accel.sh@21 -- # val= 00:07:26.549 16:59:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.549 16:59:42 -- accel/accel.sh@21 -- # val= 00:07:26.549 16:59:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.549 16:59:42 -- accel/accel.sh@21 -- # val= 00:07:26.549 16:59:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.549 16:59:42 -- accel/accel.sh@21 -- # val= 00:07:26.549 16:59:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # IFS=: 00:07:26.549 16:59:42 -- accel/accel.sh@20 -- # read -r var val 00:07:26.549 16:59:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.549 16:59:42 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:26.549 16:59:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.549 00:07:26.549 real 0m2.799s 00:07:26.549 user 0m2.508s 00:07:26.549 sys 0m0.283s 00:07:26.549 16:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.549 16:59:42 -- common/autotest_common.sh@10 -- # set +x 00:07:26.549 ************************************ 00:07:26.549 END TEST accel_xor 00:07:26.549 ************************************ 00:07:26.549 16:59:42 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:26.549 16:59:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:26.549 16:59:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.549 16:59:42 -- common/autotest_common.sh@10 -- # set +x 00:07:26.549 ************************************ 00:07:26.549 START TEST 
accel_dif_verify 00:07:26.549 ************************************ 00:07:26.549 16:59:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:26.549 16:59:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.549 16:59:42 -- accel/accel.sh@17 -- # local accel_module 00:07:26.549 16:59:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:26.549 16:59:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:26.549 16:59:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.549 16:59:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.549 16:59:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.549 16:59:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.549 16:59:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.549 16:59:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.549 16:59:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.549 16:59:42 -- accel/accel.sh@42 -- # jq -r . 00:07:26.549 [2024-07-20 16:59:42.502211] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:26.549 [2024-07-20 16:59:42.502286] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430191 ] 00:07:26.549 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.549 [2024-07-20 16:59:42.563384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.549 [2024-07-20 16:59:42.653207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.919 16:59:43 -- accel/accel.sh@18 -- # out=' 00:07:27.919 SPDK Configuration: 00:07:27.919 Core mask: 0x1 00:07:27.919 00:07:27.919 Accel Perf Configuration: 00:07:27.919 Workload Type: dif_verify 00:07:27.919 Vector size: 4096 bytes 00:07:27.919 Transfer size: 4096 bytes 00:07:27.919 Block size: 512 bytes 00:07:27.919 Metadata size: 8 bytes 00:07:27.919 Vector count 1 00:07:27.919 Module: software 00:07:27.919 Queue depth: 32 00:07:27.919 Allocate depth: 32 00:07:27.919 # threads/core: 1 00:07:27.919 Run time: 1 seconds 00:07:27.919 Verify: No 00:07:27.919 00:07:27.919 Running for 1 seconds... 00:07:27.919 00:07:27.919 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.919 ------------------------------------------------------------------------------------ 00:07:27.919 0,0 81952/s 325 MiB/s 0 0 00:07:27.919 ==================================================================================== 00:07:27.919 Total 81952/s 320 MiB/s 0 0' 00:07:27.919 16:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:27.919 16:59:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:27.919 16:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:27.919 16:59:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:27.919 16:59:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.919 16:59:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.919 16:59:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.919 16:59:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.919 16:59:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.919 16:59:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.919 16:59:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.919 16:59:43 -- accel/accel.sh@42 -- # jq -r . 
00:07:27.919 [2024-07-20 16:59:43.903675] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:27.920 [2024-07-20 16:59:43.903747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430339 ] 00:07:27.920 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.920 [2024-07-20 16:59:43.963934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.920 [2024-07-20 16:59:44.052956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.177 16:59:44 -- accel/accel.sh@21 -- # val= 00:07:28.177 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 16:59:44 -- accel/accel.sh@21 -- # val= 00:07:28.177 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 16:59:44 -- accel/accel.sh@21 -- # val=0x1 00:07:28.177 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 16:59:44 -- accel/accel.sh@21 -- # val= 00:07:28.177 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 16:59:44 -- accel/accel.sh@21 -- # val= 00:07:28.177 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.177 16:59:44 -- accel/accel.sh@21 -- # val=dif_verify 00:07:28.177 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.177 16:59:44 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.177 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val= 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val=software 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val=32 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val=32 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val=1 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val=No 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val= 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:28.178 16:59:44 -- accel/accel.sh@21 -- # val= 00:07:28.178 16:59:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # IFS=: 00:07:28.178 16:59:44 -- accel/accel.sh@20 -- # read -r var val 00:07:29.549 16:59:45 -- accel/accel.sh@21 -- # val= 00:07:29.549 16:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.549 16:59:45 -- accel/accel.sh@21 -- # val= 00:07:29.549 16:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.549 16:59:45 -- accel/accel.sh@21 -- # val= 00:07:29.549 16:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.549 16:59:45 -- accel/accel.sh@21 -- # val= 00:07:29.549 16:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.549 16:59:45 -- accel/accel.sh@21 -- # val= 00:07:29.549 16:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.549 16:59:45 -- accel/accel.sh@21 -- # val= 00:07:29.549 16:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:29.549 16:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:29.549 16:59:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.549 16:59:45 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:29.549 16:59:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.549 00:07:29.549 real 0m2.798s 00:07:29.549 user 0m2.509s 00:07:29.549 sys 0m0.285s 00:07:29.549 16:59:45 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.549 16:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:29.549 ************************************ 00:07:29.549 END TEST accel_dif_verify 00:07:29.549 ************************************ 00:07:29.549 16:59:45 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:29.549 16:59:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:29.549 16:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.549 16:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:29.549 ************************************ 00:07:29.549 START TEST accel_dif_generate 00:07:29.549 ************************************ 00:07:29.549 16:59:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:29.549 16:59:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.549 16:59:45 -- accel/accel.sh@17 -- # local accel_module 00:07:29.549 16:59:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:29.549 16:59:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:29.549 16:59:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.549 16:59:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.549 16:59:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.549 16:59:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.549 16:59:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.549 16:59:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.549 16:59:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.549 16:59:45 -- accel/accel.sh@42 -- # jq -r . 00:07:29.549 [2024-07-20 16:59:45.324640] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:29.549 [2024-07-20 16:59:45.324726] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430572 ] 00:07:29.549 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.549 [2024-07-20 16:59:45.386835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.549 [2024-07-20 16:59:45.477162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.921 16:59:46 -- accel/accel.sh@18 -- # out=' 00:07:30.921 SPDK Configuration: 00:07:30.921 Core mask: 0x1 00:07:30.921 00:07:30.921 Accel Perf Configuration: 00:07:30.921 Workload Type: dif_generate 00:07:30.921 Vector size: 4096 bytes 00:07:30.921 Transfer size: 4096 bytes 00:07:30.921 Block size: 512 bytes 00:07:30.921 Metadata size: 8 bytes 00:07:30.921 Vector count 1 00:07:30.921 Module: software 00:07:30.921 Queue depth: 32 00:07:30.921 Allocate depth: 32 00:07:30.921 # threads/core: 1 00:07:30.921 Run time: 1 seconds 00:07:30.921 Verify: No 00:07:30.921 00:07:30.921 Running for 1 seconds... 
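The long runs of "val=" and case lines in this trace come from accel.sh's option loop, which re-parses accel_perf's "key: value" configuration block to recover the opcode and module under test. A simplified reconstruction of that pattern (an assumption, not the verbatim script):

  # hypothetical sketch of the loop behind the "val=" trace lines above;
  # $out is assumed to hold the "SPDK Configuration:" block printed by accel_perf
  while IFS=: read -r var val; do
    case "$var" in
      *'Workload Type'*) accel_opc=${val# } ;;   # e.g. dif_generate
      *Module*) accel_module=${val# } ;;         # e.g. software
    esac
  done <<< "$out"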
00:07:30.921 00:07:30.921 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.921 ------------------------------------------------------------------------------------ 00:07:30.921 0,0 96384/s 382 MiB/s 0 0 00:07:30.921 ==================================================================================== 00:07:30.921 Total 96384/s 376 MiB/s 0 0' 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:30.921 16:59:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.921 16:59:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.921 16:59:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.921 16:59:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.921 16:59:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.921 16:59:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.921 16:59:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.921 16:59:46 -- accel/accel.sh@42 -- # jq -r . 00:07:30.921 [2024-07-20 16:59:46.727344] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:30.921 [2024-07-20 16:59:46.727420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430758 ] 00:07:30.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.921 [2024-07-20 16:59:46.788291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.921 [2024-07-20 16:59:46.877966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val= 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val= 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val=0x1 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val= 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val= 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val=dif_generate 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 
00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.921 16:59:46 -- accel/accel.sh@21 -- # val= 00:07:30.921 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.921 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.922 16:59:46 -- accel/accel.sh@21 -- # val=software 00:07:30.922 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.922 16:59:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.922 16:59:46 -- accel/accel.sh@21 -- # val=32 00:07:30.922 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.922 16:59:46 -- accel/accel.sh@21 -- # val=32 00:07:30.922 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.922 16:59:46 -- accel/accel.sh@21 -- # val=1 00:07:30.922 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.922 16:59:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.922 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.922 16:59:46 -- accel/accel.sh@21 -- # val=No 00:07:30.922 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.922 16:59:46 -- accel/accel.sh@21 -- # val= 00:07:30.922 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:30.922 16:59:46 -- accel/accel.sh@21 -- # val= 00:07:30.922 16:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:30.922 16:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:32.295 16:59:48 -- accel/accel.sh@21 -- # val= 00:07:32.295 16:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.295 16:59:48 -- accel/accel.sh@21 -- # val= 00:07:32.295 16:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.295 16:59:48 -- accel/accel.sh@21 -- # val= 00:07:32.295 16:59:48 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.295 16:59:48 -- accel/accel.sh@21 -- # val= 00:07:32.295 16:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.295 16:59:48 -- accel/accel.sh@21 -- # val= 00:07:32.295 16:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.295 16:59:48 -- accel/accel.sh@21 -- # val= 00:07:32.295 16:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:32.295 16:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:32.295 16:59:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.295 16:59:48 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:32.295 16:59:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.295 00:07:32.295 real 0m2.800s 00:07:32.295 user 0m2.509s 00:07:32.295 sys 0m0.286s 00:07:32.295 16:59:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.295 16:59:48 -- common/autotest_common.sh@10 -- # set +x 00:07:32.295 ************************************ 00:07:32.296 END TEST accel_dif_generate 00:07:32.296 ************************************ 00:07:32.296 16:59:48 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:32.296 16:59:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:32.296 16:59:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.296 16:59:48 -- common/autotest_common.sh@10 -- # set +x 00:07:32.296 ************************************ 00:07:32.296 START TEST accel_dif_generate_copy 00:07:32.296 ************************************ 00:07:32.296 16:59:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:32.296 16:59:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.296 16:59:48 -- accel/accel.sh@17 -- # local accel_module 00:07:32.296 16:59:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:32.296 16:59:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:32.296 16:59:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.296 16:59:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.296 16:59:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.296 16:59:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.296 16:59:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.296 16:59:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.296 16:59:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.296 16:59:48 -- accel/accel.sh@42 -- # jq -r . 00:07:32.296 [2024-07-20 16:59:48.146387] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:32.296 [2024-07-20 16:59:48.146472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430917 ] 00:07:32.296 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.296 [2024-07-20 16:59:48.210690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.296 [2024-07-20 16:59:48.300667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.669 16:59:49 -- accel/accel.sh@18 -- # out=' 00:07:33.669 SPDK Configuration: 00:07:33.669 Core mask: 0x1 00:07:33.669 00:07:33.669 Accel Perf Configuration: 00:07:33.669 Workload Type: dif_generate_copy 00:07:33.669 Vector size: 4096 bytes 00:07:33.669 Transfer size: 4096 bytes 00:07:33.669 Vector count 1 00:07:33.669 Module: software 00:07:33.669 Queue depth: 32 00:07:33.669 Allocate depth: 32 00:07:33.669 # threads/core: 1 00:07:33.669 Run time: 1 seconds 00:07:33.669 Verify: No 00:07:33.669 00:07:33.669 Running for 1 seconds... 00:07:33.669 00:07:33.669 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.669 ------------------------------------------------------------------------------------ 00:07:33.669 0,0 76064/s 301 MiB/s 0 0 00:07:33.669 ==================================================================================== 00:07:33.669 Total 76064/s 297 MiB/s 0 0' 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:33.669 16:59:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.669 16:59:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.669 16:59:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.669 16:59:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.669 16:59:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.669 16:59:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.669 16:59:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.669 16:59:49 -- accel/accel.sh@42 -- # jq -r . 00:07:33.669 [2024-07-20 16:59:49.552477] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:33.669 [2024-07-20 16:59:49.552546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431056 ] 00:07:33.669 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.669 [2024-07-20 16:59:49.613025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.669 [2024-07-20 16:59:49.702045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val= 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val= 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val=0x1 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val= 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val= 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val= 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val=software 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val=32 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val=32 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var 
val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val=1 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.669 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.669 16:59:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.669 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.670 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.670 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.670 16:59:49 -- accel/accel.sh@21 -- # val=No 00:07:33.670 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.670 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.670 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.670 16:59:49 -- accel/accel.sh@21 -- # val= 00:07:33.670 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.670 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.670 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:33.670 16:59:49 -- accel/accel.sh@21 -- # val= 00:07:33.670 16:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.670 16:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:33.670 16:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:35.045 16:59:50 -- accel/accel.sh@21 -- # val= 00:07:35.045 16:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:35.045 16:59:50 -- accel/accel.sh@21 -- # val= 00:07:35.045 16:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:35.045 16:59:50 -- accel/accel.sh@21 -- # val= 00:07:35.045 16:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:35.045 16:59:50 -- accel/accel.sh@21 -- # val= 00:07:35.045 16:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:35.045 16:59:50 -- accel/accel.sh@21 -- # val= 00:07:35.045 16:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:35.045 16:59:50 -- accel/accel.sh@21 -- # val= 00:07:35.045 16:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:35.045 16:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:35.045 16:59:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.045 16:59:50 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:35.045 16:59:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.045 00:07:35.045 real 0m2.799s 00:07:35.045 user 0m2.508s 00:07:35.045 sys 0m0.283s 00:07:35.045 16:59:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.045 16:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:35.045 ************************************ 00:07:35.045 END TEST accel_dif_generate_copy 00:07:35.045 ************************************ 00:07:35.045 16:59:50 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:35.045 16:59:50 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.045 16:59:50 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:35.045 16:59:50 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.045 16:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:35.045 ************************************ 00:07:35.045 START TEST accel_comp 00:07:35.045 ************************************ 00:07:35.045 16:59:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.045 16:59:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.045 16:59:50 -- accel/accel.sh@17 -- # local accel_module 00:07:35.045 16:59:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.045 16:59:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.045 16:59:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.045 16:59:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.045 16:59:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.045 16:59:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.045 16:59:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.045 16:59:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.045 16:59:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.045 16:59:50 -- accel/accel.sh@42 -- # jq -r . 00:07:35.045 [2024-07-20 16:59:50.968076] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:35.045 [2024-07-20 16:59:50.968180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431246 ] 00:07:35.045 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.045 [2024-07-20 16:59:51.032029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.045 [2024-07-20 16:59:51.121325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.420 16:59:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:36.420 00:07:36.420 SPDK Configuration: 00:07:36.420 Core mask: 0x1 00:07:36.420 00:07:36.420 Accel Perf Configuration: 00:07:36.420 Workload Type: compress 00:07:36.420 Transfer size: 4096 bytes 00:07:36.420 Vector count 1 00:07:36.420 Module: software 00:07:36.420 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.420 Queue depth: 32 00:07:36.420 Allocate depth: 32 00:07:36.420 # threads/core: 1 00:07:36.420 Run time: 1 seconds 00:07:36.420 Verify: No 00:07:36.420 00:07:36.420 Running for 1 seconds... 
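Unlike the DIF workloads, compress needs real input data, which is why the command line above adds -l pointing at test/accel/bib ("Preparing input file..." in the output). A minimal standalone sketch, with the -t/-w/-l flags and the file path taken from the trace and the dropped -c config an assumption:

  # minimal sketch: software-module compression of the bundled test file
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib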
00:07:36.420 00:07:36.420 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.420 ------------------------------------------------------------------------------------ 00:07:36.420 0,0 32064/s 133 MiB/s 0 0 00:07:36.420 ==================================================================================== 00:07:36.420 Total 32064/s 125 MiB/s 0 0' 00:07:36.420 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.420 16:59:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.420 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.420 16:59:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.420 16:59:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.420 16:59:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.420 16:59:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.420 16:59:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.420 16:59:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.420 16:59:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.420 16:59:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.420 16:59:52 -- accel/accel.sh@42 -- # jq -r . 00:07:36.420 [2024-07-20 16:59:52.374375] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:36.420 [2024-07-20 16:59:52.374451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431479 ] 00:07:36.420 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.420 [2024-07-20 16:59:52.435608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.420 [2024-07-20 16:59:52.525313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.678 16:59:52 -- accel/accel.sh@21 -- # val= 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val= 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val= 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val=0x1 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val= 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val= 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val=compress 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 
16:59:52 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val= 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val=software 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val=32 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val=32 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val=1 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val=No 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val= 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:36.679 16:59:52 -- accel/accel.sh@21 -- # val= 00:07:36.679 16:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:36.679 16:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:37.623 16:59:53 -- accel/accel.sh@21 -- # val= 00:07:37.623 16:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.623 16:59:53 -- accel/accel.sh@21 -- # val= 00:07:37.623 16:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.623 16:59:53 -- accel/accel.sh@21 -- # val= 00:07:37.623 16:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # 
IFS=: 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.623 16:59:53 -- accel/accel.sh@21 -- # val= 00:07:37.623 16:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.623 16:59:53 -- accel/accel.sh@21 -- # val= 00:07:37.623 16:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.623 16:59:53 -- accel/accel.sh@21 -- # val= 00:07:37.623 16:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:37.623 16:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:37.623 16:59:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.623 16:59:53 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:37.623 16:59:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.623 00:07:37.623 real 0m2.816s 00:07:37.623 user 0m2.518s 00:07:37.623 sys 0m0.292s 00:07:37.623 16:59:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.623 16:59:53 -- common/autotest_common.sh@10 -- # set +x 00:07:37.623 ************************************ 00:07:37.623 END TEST accel_comp 00:07:37.623 ************************************ 00:07:37.887 16:59:53 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.887 16:59:53 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:37.887 16:59:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.887 16:59:53 -- common/autotest_common.sh@10 -- # set +x 00:07:37.887 ************************************ 00:07:37.887 START TEST accel_decomp 00:07:37.887 ************************************ 00:07:37.887 16:59:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.887 16:59:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.887 16:59:53 -- accel/accel.sh@17 -- # local accel_module 00:07:37.887 16:59:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.887 16:59:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:37.887 16:59:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.887 16:59:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.887 16:59:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.887 16:59:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.887 16:59:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.887 16:59:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.887 16:59:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.887 16:59:53 -- accel/accel.sh@42 -- # jq -r . 00:07:37.887 [2024-07-20 16:59:53.810960] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:37.887 [2024-07-20 16:59:53.811039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431643 ] 00:07:37.887 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.887 [2024-07-20 16:59:53.873907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.887 [2024-07-20 16:59:53.962856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.259 16:59:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:39.259 00:07:39.259 SPDK Configuration: 00:07:39.259 Core mask: 0x1 00:07:39.259 00:07:39.259 Accel Perf Configuration: 00:07:39.259 Workload Type: decompress 00:07:39.259 Transfer size: 4096 bytes 00:07:39.259 Vector count 1 00:07:39.259 Module: software 00:07:39.259 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.259 Queue depth: 32 00:07:39.259 Allocate depth: 32 00:07:39.259 # threads/core: 1 00:07:39.259 Run time: 1 seconds 00:07:39.259 Verify: Yes 00:07:39.259 00:07:39.259 Running for 1 seconds... 00:07:39.259 00:07:39.259 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.259 ------------------------------------------------------------------------------------ 00:07:39.259 0,0 55648/s 102 MiB/s 0 0 00:07:39.259 ==================================================================================== 00:07:39.259 Total 55648/s 217 MiB/s 0 0' 00:07:39.259 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.259 16:59:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:39.259 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.259 16:59:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:39.259 16:59:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.259 16:59:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.259 16:59:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.259 16:59:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.259 16:59:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.259 16:59:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.259 16:59:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.259 16:59:55 -- accel/accel.sh@42 -- # jq -r . 00:07:39.259 [2024-07-20 16:59:55.215785] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:39.259 [2024-07-20 16:59:55.215901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431784 ] 00:07:39.259 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.259 [2024-07-20 16:59:55.278210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.259 [2024-07-20 16:59:55.365498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val= 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val= 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val= 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val=0x1 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val= 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val= 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val=decompress 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val= 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val=software 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val=32 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 
-- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val=32 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val=1 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val=Yes 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val= 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:39.516 16:59:55 -- accel/accel.sh@21 -- # val= 00:07:39.516 16:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:39.516 16:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:40.448 16:59:56 -- accel/accel.sh@21 -- # val= 00:07:40.448 16:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.448 16:59:56 -- accel/accel.sh@21 -- # val= 00:07:40.448 16:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.448 16:59:56 -- accel/accel.sh@21 -- # val= 00:07:40.448 16:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.448 16:59:56 -- accel/accel.sh@21 -- # val= 00:07:40.448 16:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.448 16:59:56 -- accel/accel.sh@21 -- # val= 00:07:40.448 16:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.448 16:59:56 -- accel/accel.sh@21 -- # val= 00:07:40.448 16:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:40.448 16:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:40.448 16:59:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.448 16:59:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.448 16:59:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.448 00:07:40.448 real 0m2.795s 00:07:40.448 user 0m2.500s 00:07:40.448 sys 0m0.289s 00:07:40.448 16:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.448 16:59:56 -- common/autotest_common.sh@10 -- # set +x 00:07:40.448 ************************************ 00:07:40.448 END TEST accel_decomp 00:07:40.448 ************************************ 00:07:40.705 16:59:56 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.705 16:59:56 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:40.705 16:59:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.705 16:59:56 -- common/autotest_common.sh@10 -- # set +x 00:07:40.705 ************************************ 00:07:40.705 START TEST accel_decmop_full 00:07:40.705 ************************************ 00:07:40.705 16:59:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.705 16:59:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.705 16:59:56 -- accel/accel.sh@17 -- # local accel_module 00:07:40.705 16:59:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.705 16:59:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.705 16:59:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.705 16:59:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.705 16:59:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.705 16:59:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.705 16:59:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.705 16:59:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.705 16:59:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.705 16:59:56 -- accel/accel.sh@42 -- # jq -r . 00:07:40.705 [2024-07-20 16:59:56.629072] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:40.705 [2024-07-20 16:59:56.629166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid431948 ] 00:07:40.705 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.705 [2024-07-20 16:59:56.691753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.705 [2024-07-20 16:59:56.782352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.078 16:59:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:42.078 00:07:42.078 SPDK Configuration: 00:07:42.078 Core mask: 0x1 00:07:42.078 00:07:42.078 Accel Perf Configuration: 00:07:42.078 Workload Type: decompress 00:07:42.078 Transfer size: 111250 bytes 00:07:42.078 Vector count 1 00:07:42.078 Module: software 00:07:42.078 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.078 Queue depth: 32 00:07:42.078 Allocate depth: 32 00:07:42.078 # threads/core: 1 00:07:42.078 Run time: 1 seconds 00:07:42.078 Verify: Yes 00:07:42.078 00:07:42.078 Running for 1 seconds... 
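The -o 0 in this variant asks accel_perf for full-sized transfers, which is why the configuration above reports a 111250-byte transfer size instead of 4096. The Total row again checks out: 3808/s x 111250 B = 423,640,000 B/s, about 404 MiB/s, as reported.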
00:07:42.078 00:07:42.078 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.078 ------------------------------------------------------------------------------------ 00:07:42.078 0,0 3808/s 157 MiB/s 0 0 00:07:42.078 ==================================================================================== 00:07:42.078 Total 3808/s 404 MiB/s 0 0' 00:07:42.078 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.079 16:59:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.079 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.079 16:59:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.079 16:59:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.079 16:59:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.079 16:59:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.079 16:59:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.079 16:59:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.079 16:59:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.079 16:59:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.079 16:59:58 -- accel/accel.sh@42 -- # jq -r . 00:07:42.079 [2024-07-20 16:59:58.055250] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:42.079 [2024-07-20 16:59:58.055339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432207 ] 00:07:42.079 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.079 [2024-07-20 16:59:58.117052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.079 [2024-07-20 16:59:58.209860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.336 16:59:58 -- accel/accel.sh@21 -- # val= 00:07:42.336 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.336 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.336 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.336 16:59:58 -- accel/accel.sh@21 -- # val= 00:07:42.336 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.336 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.336 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.336 16:59:58 -- accel/accel.sh@21 -- # val= 00:07:42.336 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.336 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.336 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.336 16:59:58 -- accel/accel.sh@21 -- # val=0x1 00:07:42.336 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.336 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val= 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val= 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val=decompress 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 
00:07:42.337 16:59:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val= 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val=software 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val=32 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val=32 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val=1 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val=Yes 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val= 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:42.337 16:59:58 -- accel/accel.sh@21 -- # val= 00:07:42.337 16:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:42.337 16:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:43.708 16:59:59 -- accel/accel.sh@21 -- # val= 00:07:43.708 16:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.708 16:59:59 -- accel/accel.sh@21 -- # val= 00:07:43.708 16:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.708 16:59:59 -- accel/accel.sh@21 -- # val= 00:07:43.708 16:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.708 16:59:59 -- 
accel/accel.sh@20 -- # IFS=: 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.708 16:59:59 -- accel/accel.sh@21 -- # val= 00:07:43.708 16:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.708 16:59:59 -- accel/accel.sh@21 -- # val= 00:07:43.708 16:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.708 16:59:59 -- accel/accel.sh@21 -- # val= 00:07:43.708 16:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:43.708 16:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:43.708 16:59:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.708 16:59:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.708 16:59:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.708 00:07:43.708 real 0m2.856s 00:07:43.708 user 0m2.559s 00:07:43.708 sys 0m0.290s 00:07:43.708 16:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.708 16:59:59 -- common/autotest_common.sh@10 -- # set +x 00:07:43.708 ************************************ 00:07:43.708 END TEST accel_decmop_full 00:07:43.708 ************************************ 00:07:43.708 16:59:59 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.708 16:59:59 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:43.708 16:59:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.708 16:59:59 -- common/autotest_common.sh@10 -- # set +x 00:07:43.708 ************************************ 00:07:43.708 START TEST accel_decomp_mcore 00:07:43.708 ************************************ 00:07:43.708 16:59:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.708 16:59:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.708 16:59:59 -- accel/accel.sh@17 -- # local accel_module 00:07:43.708 16:59:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.708 16:59:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.708 16:59:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.708 16:59:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.708 16:59:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.708 16:59:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.708 16:59:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.708 16:59:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.708 16:59:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.708 16:59:59 -- accel/accel.sh@42 -- # jq -r . 00:07:43.708 [2024-07-20 16:59:59.507808] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
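A note on the launch pattern traced above: build_accel_config assembles a JSON accel configuration in the accel_json_cfg array and hands it to accel_perf as the "file" /dev/fd/62, so no temporary config file is needed. A minimal stand-alone sketch of the same pattern, assuming stock bash and the accel_perf binary at the path shown in the trace (the empty JSON object is a stand-in for what build_accel_config emits when no module options are set):

  accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  # Duplicate a process substitution onto fd 62 so the child can open it
  # back as the regular-looking config file /dev/fd/62.
  exec 62< <(echo '{}')
  "$accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l ./test/accel/bib -y -m 0xf

The flags map directly onto the "Accel Perf Configuration" block printed further down: -w is the workload type, -t the run time in seconds, -m the core mask, -y enables verification, and -l names the compressed input file.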
00:07:43.708 [2024-07-20 16:59:59.507896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432360 ] 00:07:43.708 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.708 [2024-07-20 16:59:59.569307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.708 [2024-07-20 16:59:59.663905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.708 [2024-07-20 16:59:59.663958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.708 [2024-07-20 16:59:59.664012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.708 [2024-07-20 16:59:59.664015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.083 17:00:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:45.083 00:07:45.083 SPDK Configuration: 00:07:45.083 Core mask: 0xf 00:07:45.083 00:07:45.083 Accel Perf Configuration: 00:07:45.083 Workload Type: decompress 00:07:45.083 Transfer size: 4096 bytes 00:07:45.083 Vector count 1 00:07:45.083 Module: software 00:07:45.083 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.083 Queue depth: 32 00:07:45.083 Allocate depth: 32 00:07:45.083 # threads/core: 1 00:07:45.083 Run time: 1 seconds 00:07:45.083 Verify: Yes 00:07:45.083 00:07:45.083 Running for 1 seconds... 00:07:45.083 00:07:45.083 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.083 ------------------------------------------------------------------------------------ 00:07:45.083 0,0 54368/s 100 MiB/s 0 0 00:07:45.083 3,0 54432/s 100 MiB/s 0 0 00:07:45.083 2,0 54656/s 100 MiB/s 0 0 00:07:45.083 1,0 54656/s 100 MiB/s 0 0 00:07:45.083 ==================================================================================== 00:07:45.083 Total 218112/s 852 MiB/s 0 0' 00:07:45.083 17:00:00 -- accel/accel.sh@20 -- # IFS=: 00:07:45.083 17:00:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.083 17:00:00 -- accel/accel.sh@20 -- # read -r var val 00:07:45.083 17:00:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.083 17:00:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.083 17:00:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.083 17:00:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.083 17:00:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.083 17:00:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.083 17:00:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.083 17:00:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.083 17:00:00 -- accel/accel.sh@42 -- # jq -r . 00:07:45.083 [2024-07-20 17:00:00.913435] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
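The Total row of the table just printed is the sum of the four per-core rates: one reactor per core in mask 0xf, with the default single worker thread per core (hence the thread index 0 in every Core,Thread pair). A quick arithmetic check against this run's numbers:

  echo $(( 54368 + 54432 + 54656 + 54656 ))   # 218112, matching "Total 218112/s"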
00:07:45.083 [2024-07-20 17:00:00.913526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432569 ] 00:07:45.083 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.083 [2024-07-20 17:00:00.976639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.083 [2024-07-20 17:00:01.073157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.083 [2024-07-20 17:00:01.073211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.083 [2024-07-20 17:00:01.073263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.083 [2024-07-20 17:00:01.073267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.083 17:00:01 -- accel/accel.sh@21 -- # val= 00:07:45.083 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.083 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.083 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.083 17:00:01 -- accel/accel.sh@21 -- # val= 00:07:45.083 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.083 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.083 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.083 17:00:01 -- accel/accel.sh@21 -- # val= 00:07:45.083 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.083 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.083 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.083 17:00:01 -- accel/accel.sh@21 -- # val=0xf 00:07:45.083 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.083 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.083 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.083 17:00:01 -- accel/accel.sh@21 -- # val= 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val= 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val=decompress 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val= 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val=software 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val=32 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val=32 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val=1 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val=Yes 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val= 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:45.084 17:00:01 -- accel/accel.sh@21 -- # val= 00:07:45.084 17:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:45.084 17:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@21 -- # val= 00:07:46.506 17:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@21 -- # val= 00:07:46.506 17:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@21 -- # val= 00:07:46.506 17:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@21 -- # val= 00:07:46.506 17:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@21 -- # val= 00:07:46.506 17:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@21 -- # val= 00:07:46.506 17:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@21 -- # val= 00:07:46.506 17:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@21 -- # val= 00:07:46.506 17:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.506 
17:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@21 -- # val= 00:07:46.506 17:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:46.506 17:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:46.506 17:00:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.506 17:00:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:46.506 17:00:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.506 00:07:46.506 real 0m2.807s 00:07:46.506 user 0m9.339s 00:07:46.506 sys 0m0.300s 00:07:46.506 17:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.506 17:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.506 ************************************ 00:07:46.506 END TEST accel_decomp_mcore 00:07:46.506 ************************************ 00:07:46.506 17:00:02 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.506 17:00:02 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:46.506 17:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.506 17:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:46.506 ************************************ 00:07:46.506 START TEST accel_decomp_full_mcore 00:07:46.506 ************************************ 00:07:46.506 17:00:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.506 17:00:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.506 17:00:02 -- accel/accel.sh@17 -- # local accel_module 00:07:46.506 17:00:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.506 17:00:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.506 17:00:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.506 17:00:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.506 17:00:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.506 17:00:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.506 17:00:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.506 17:00:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.506 17:00:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.506 17:00:02 -- accel/accel.sh@42 -- # jq -r . 00:07:46.506 [2024-07-20 17:00:02.342857] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
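Worth noting in the timing lines above: real is 0m2.807s but user is 0m9.339s. SPDK reactors are poll-mode threads that keep their cores busy for the whole run, so with core mask 0xf the process accumulates close to four CPU-seconds per wall-clock second:

  echo "scale=2; 9.339 / 2.807" | bc   # ~3.33 cores busy on average over the run

The START/END banners and the real/user/sys lines come from the harness's run_test wrapper. Its rough shape, inferred from the output in this log (the real helper lives in test/common/autotest_common.sh and does more bookkeeping):

  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                # bash's time keyword prints the real/user/sys lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }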
00:07:46.506 [2024-07-20 17:00:02.342937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid432860 ] 00:07:46.506 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.506 [2024-07-20 17:00:02.407365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.506 [2024-07-20 17:00:02.503819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.506 [2024-07-20 17:00:02.503874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.507 [2024-07-20 17:00:02.503926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.507 [2024-07-20 17:00:02.503929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.877 17:00:03 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:47.877 00:07:47.877 SPDK Configuration: 00:07:47.877 Core mask: 0xf 00:07:47.877 00:07:47.877 Accel Perf Configuration: 00:07:47.877 Workload Type: decompress 00:07:47.877 Transfer size: 111250 bytes 00:07:47.877 Vector count 1 00:07:47.877 Module: software 00:07:47.877 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.877 Queue depth: 32 00:07:47.877 Allocate depth: 32 00:07:47.877 # threads/core: 1 00:07:47.877 Run time: 1 seconds 00:07:47.877 Verify: Yes 00:07:47.877 00:07:47.877 Running for 1 seconds... 00:07:47.877 00:07:47.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.877 ------------------------------------------------------------------------------------ 00:07:47.877 0,0 3776/s 155 MiB/s 0 0 00:07:47.877 3,0 3776/s 155 MiB/s 0 0 00:07:47.877 2,0 3776/s 155 MiB/s 0 0 00:07:47.877 1,0 3808/s 157 MiB/s 0 0 00:07:47.877 ==================================================================================== 00:07:47.877 Total 15136/s 1605 MiB/s 0 0' 00:07:47.877 17:00:03 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.877 17:00:03 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.877 17:00:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.877 17:00:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.877 17:00:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.877 17:00:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.877 17:00:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.877 17:00:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.877 17:00:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.877 17:00:03 -- accel/accel.sh@42 -- # jq -r . 00:07:47.877 [2024-07-20 17:00:03.785678] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
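Compared with the 4096-byte runs earlier, this pass was launched with -o 0 and the configuration block reports "Transfer size: 111250 bytes"; judging from this log, -o 0 lets the decompress workload take its transfer size from the test file's data chunk rather than a fixed block size. The Total row again checks out against the per-core rates:

  echo $(( 3776 + 3776 + 3776 + 3808 ))   # 15136, matching "Total 15136/s"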
00:07:47.877 [2024-07-20 17:00:03.785769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433050 ] 00:07:47.877 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.877 [2024-07-20 17:00:03.850466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.877 [2024-07-20 17:00:03.946678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.877 [2024-07-20 17:00:03.946733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.877 [2024-07-20 17:00:03.946786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.877 [2024-07-20 17:00:03.946789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val= 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val= 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val= 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val=0xf 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val= 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val= 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val=decompress 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val= 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val=software 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val=32 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val=32 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val=1 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val=Yes 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val= 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:47.877 17:00:04 -- accel/accel.sh@21 -- # val= 00:07:47.877 17:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:47.877 17:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@21 -- # val= 00:07:49.249 17:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@21 -- # val= 00:07:49.249 17:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@21 -- # val= 00:07:49.249 17:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@21 -- # val= 00:07:49.249 17:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@21 -- # val= 00:07:49.249 17:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@21 -- # val= 00:07:49.249 17:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@21 -- # val= 00:07:49.249 17:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@21 -- # val= 00:07:49.249 17:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.249 
17:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@21 -- # val= 00:07:49.249 17:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:49.249 17:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:49.249 17:00:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.249 17:00:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:49.249 17:00:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.249 00:07:49.249 real 0m2.889s 00:07:49.249 user 0m9.582s 00:07:49.249 sys 0m0.324s 00:07:49.249 17:00:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.249 17:00:05 -- common/autotest_common.sh@10 -- # set +x 00:07:49.249 ************************************ 00:07:49.249 END TEST accel_decomp_full_mcore 00:07:49.249 ************************************ 00:07:49.249 17:00:05 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.249 17:00:05 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:49.249 17:00:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.249 17:00:05 -- common/autotest_common.sh@10 -- # set +x 00:07:49.249 ************************************ 00:07:49.249 START TEST accel_decomp_mthread 00:07:49.249 ************************************ 00:07:49.249 17:00:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.249 17:00:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.249 17:00:05 -- accel/accel.sh@17 -- # local accel_module 00:07:49.249 17:00:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.249 17:00:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.249 17:00:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.249 17:00:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.249 17:00:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.249 17:00:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.249 17:00:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.249 17:00:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.249 17:00:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.249 17:00:05 -- accel/accel.sh@42 -- # jq -r . 00:07:49.249 [2024-07-20 17:00:05.258258] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:49.249 [2024-07-20 17:00:05.258344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433212 ] 00:07:49.249 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.249 [2024-07-20 17:00:05.320662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.506 [2024-07-20 17:00:05.414625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.894 17:00:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:50.894 00:07:50.894 SPDK Configuration: 00:07:50.894 Core mask: 0x1 00:07:50.894 00:07:50.894 Accel Perf Configuration: 00:07:50.894 Workload Type: decompress 00:07:50.894 Transfer size: 4096 bytes 00:07:50.894 Vector count 1 00:07:50.894 Module: software 00:07:50.894 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:50.894 Queue depth: 32 00:07:50.894 Allocate depth: 32 00:07:50.894 # threads/core: 2 00:07:50.894 Run time: 1 seconds 00:07:50.894 Verify: Yes 00:07:50.894 00:07:50.894 Running for 1 seconds... 00:07:50.894 00:07:50.894 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.894 ------------------------------------------------------------------------------------ 00:07:50.894 0,1 28128/s 51 MiB/s 0 0 00:07:50.894 0,0 28000/s 51 MiB/s 0 0 00:07:50.894 ==================================================================================== 00:07:50.894 Total 56128/s 219 MiB/s 0 0' 00:07:50.894 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.894 17:00:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.894 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.894 17:00:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:50.894 17:00:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.894 17:00:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.894 17:00:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.894 17:00:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.894 17:00:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.894 17:00:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.894 17:00:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.894 17:00:06 -- accel/accel.sh@42 -- # jq -r . 00:07:50.894 [2024-07-20 17:00:06.679165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
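With -T 2 the configuration block reports "# threads/core: 2", and the Core,Thread column in the table above now shows two worker threads multiplexed on core 0 (rows 0,0 and 0,1) instead of one row per core. The aggregate again equals the per-thread sum:

  echo $(( 28128 + 28000 ))   # 56128, matching "Total 56128/s"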
00:07:50.894 [2024-07-20 17:00:06.679254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433352 ] 00:07:50.894 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.894 [2024-07-20 17:00:06.743192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.894 [2024-07-20 17:00:06.838490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.894 17:00:06 -- accel/accel.sh@21 -- # val= 00:07:50.894 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.894 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.894 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.894 17:00:06 -- accel/accel.sh@21 -- # val= 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val= 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val=0x1 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val= 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val= 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val=decompress 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val= 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val=software 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val=32 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 
-- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val=32 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val=2 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val=Yes 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val= 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:50.895 17:00:06 -- accel/accel.sh@21 -- # val= 00:07:50.895 17:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # IFS=: 00:07:50.895 17:00:06 -- accel/accel.sh@20 -- # read -r var val 00:07:52.266 17:00:08 -- accel/accel.sh@21 -- # val= 00:07:52.266 17:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.266 17:00:08 -- accel/accel.sh@21 -- # val= 00:07:52.266 17:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.266 17:00:08 -- accel/accel.sh@21 -- # val= 00:07:52.266 17:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.266 17:00:08 -- accel/accel.sh@21 -- # val= 00:07:52.266 17:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.266 17:00:08 -- accel/accel.sh@21 -- # val= 00:07:52.266 17:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.266 17:00:08 -- accel/accel.sh@21 -- # val= 00:07:52.266 17:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.266 17:00:08 -- accel/accel.sh@21 -- # val= 00:07:52.266 17:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:52.266 17:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:52.266 17:00:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.266 17:00:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:52.266 17:00:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.266 00:07:52.266 real 0m2.830s 00:07:52.266 user 0m2.531s 00:07:52.266 sys 0m0.293s 00:07:52.266 17:00:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.266 17:00:08 -- common/autotest_common.sh@10 -- # set +x 
00:07:52.266 ************************************ 00:07:52.266 END TEST accel_decomp_mthread 00:07:52.266 ************************************ 00:07:52.266 17:00:08 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.266 17:00:08 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:52.266 17:00:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.266 17:00:08 -- common/autotest_common.sh@10 -- # set +x 00:07:52.266 ************************************ 00:07:52.266 START TEST accel_deomp_full_mthread 00:07:52.266 ************************************ 00:07:52.266 17:00:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.266 17:00:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.267 17:00:08 -- accel/accel.sh@17 -- # local accel_module 00:07:52.267 17:00:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.267 17:00:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.267 17:00:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.267 17:00:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.267 17:00:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.267 17:00:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.267 17:00:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.267 17:00:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.267 17:00:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.267 17:00:08 -- accel/accel.sh@42 -- # jq -r . 00:07:52.267 [2024-07-20 17:00:08.109690] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:52.267 [2024-07-20 17:00:08.109774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433956 ] 00:07:52.267 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.267 [2024-07-20 17:00:08.168485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.267 [2024-07-20 17:00:08.261722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.636 17:00:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:53.636 00:07:53.636 SPDK Configuration: 00:07:53.636 Core mask: 0x1 00:07:53.636 00:07:53.636 Accel Perf Configuration: 00:07:53.636 Workload Type: decompress 00:07:53.636 Transfer size: 111250 bytes 00:07:53.636 Vector count 1 00:07:53.636 Module: software 00:07:53.636 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:53.636 Queue depth: 32 00:07:53.636 Allocate depth: 32 00:07:53.636 # threads/core: 2 00:07:53.636 Run time: 1 seconds 00:07:53.636 Verify: Yes 00:07:53.636 00:07:53.636 Running for 1 seconds... 
00:07:53.636 00:07:53.636 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.636 ------------------------------------------------------------------------------------ 00:07:53.636 0,1 1952/s 80 MiB/s 0 0 00:07:53.636 0,0 1920/s 79 MiB/s 0 0 00:07:53.636 ==================================================================================== 00:07:53.636 Total 3872/s 410 MiB/s 0 0' 00:07:53.636 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.636 17:00:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.636 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.636 17:00:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:53.636 17:00:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.636 17:00:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.636 17:00:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.636 17:00:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.636 17:00:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.636 17:00:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.636 17:00:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.636 17:00:09 -- accel/accel.sh@42 -- # jq -r . 00:07:53.636 [2024-07-20 17:00:09.547719] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:53.636 [2024-07-20 17:00:09.547817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434287 ] 00:07:53.636 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.636 [2024-07-20 17:00:09.610005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.637 [2024-07-20 17:00:09.703011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val= 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val= 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val= 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val=0x1 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val= 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val= 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val=decompress 00:07:53.637 
17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val= 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val=software 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val=32 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val=32 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val=2 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val=Yes 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val= 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:53.637 17:00:09 -- accel/accel.sh@21 -- # val= 00:07:53.637 17:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:53.637 17:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:55.018 17:00:10 -- accel/accel.sh@21 -- # val= 00:07:55.018 17:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:55.018 17:00:10 -- accel/accel.sh@21 -- # val= 00:07:55.018 17:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:55.018 17:00:10 -- accel/accel.sh@21 -- # val= 00:07:55.018 17:00:10 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:55.018 17:00:10 -- accel/accel.sh@21 -- # val= 00:07:55.018 17:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:55.018 17:00:10 -- accel/accel.sh@21 -- # val= 00:07:55.018 17:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:55.018 17:00:10 -- accel/accel.sh@21 -- # val= 00:07:55.018 17:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:55.018 17:00:10 -- accel/accel.sh@21 -- # val= 00:07:55.018 17:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:55.018 17:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:55.018 17:00:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:55.018 17:00:10 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:55.019 17:00:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.019 00:07:55.019 real 0m2.890s 00:07:55.019 user 0m2.591s 00:07:55.019 sys 0m0.292s 00:07:55.019 17:00:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.019 17:00:10 -- common/autotest_common.sh@10 -- # set +x 00:07:55.019 ************************************ 00:07:55.019 END TEST accel_deomp_full_mthread 00:07:55.019 ************************************ 00:07:55.019 17:00:11 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:55.019 17:00:11 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.019 17:00:11 -- accel/accel.sh@129 -- # build_accel_config 00:07:55.019 17:00:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.019 17:00:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:55.019 17:00:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.019 17:00:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.019 17:00:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.019 17:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:55.019 17:00:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.019 17:00:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.019 17:00:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.019 17:00:11 -- accel/accel.sh@42 -- # jq -r . 00:07:55.019 ************************************ 00:07:55.019 START TEST accel_dif_functional_tests 00:07:55.019 ************************************ 00:07:55.019 17:00:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.019 [2024-07-20 17:00:11.048428] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:55.019 [2024-07-20 17:00:11.048506] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434447 ] 00:07:55.019 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.019 [2024-07-20 17:00:11.110617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.276 [2024-07-20 17:00:11.207417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.276 [2024-07-20 17:00:11.207471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.276 [2024-07-20 17:00:11.207474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.276 00:07:55.276 00:07:55.276 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.276 http://cunit.sourceforge.net/ 00:07:55.276 00:07:55.276 00:07:55.276 Suite: accel_dif 00:07:55.276 Test: verify: DIF generated, GUARD check ...passed 00:07:55.276 Test: verify: DIF generated, APPTAG check ...passed 00:07:55.276 Test: verify: DIF generated, REFTAG check ...passed 00:07:55.276 Test: verify: DIF not generated, GUARD check ...[2024-07-20 17:00:11.301408] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.276 [2024-07-20 17:00:11.301478] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.276 passed 00:07:55.277 Test: verify: DIF not generated, APPTAG check ...[2024-07-20 17:00:11.301520] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.277 [2024-07-20 17:00:11.301558] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.277 passed 00:07:55.277 Test: verify: DIF not generated, REFTAG check ...[2024-07-20 17:00:11.301596] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.277 [2024-07-20 17:00:11.301625] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.277 passed 00:07:55.277 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:55.277 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-20 17:00:11.301695] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:55.277 passed 00:07:55.277 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:55.277 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:55.277 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:55.277 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-20 17:00:11.301877] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:55.277 passed 00:07:55.277 Test: generate copy: DIF generated, GUARD check ...passed 00:07:55.277 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:55.277 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:55.277 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:55.277 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:55.277 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:55.277 Test: generate copy: iovecs-len validate ...[2024-07-20 17:00:11.302195] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:55.277 passed 00:07:55.277 Test: generate copy: buffer alignment validate ...passed 00:07:55.277 00:07:55.277 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.277 suites 1 1 n/a 0 0 00:07:55.277 tests 20 20 20 0 0 00:07:55.277 asserts 204 204 204 0 n/a 00:07:55.277 00:07:55.277 Elapsed time = 0.003 seconds 00:07:55.534 00:07:55.534 real 0m0.503s 00:07:55.534 user 0m0.780s 00:07:55.534 sys 0m0.180s 00:07:55.534 17:00:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.534 17:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:55.534 ************************************ 00:07:55.534 END TEST accel_dif_functional_tests 00:07:55.534 ************************************ 00:07:55.534 00:07:55.534 real 0m59.752s 00:07:55.534 user 1m7.577s 00:07:55.534 sys 0m7.143s 00:07:55.534 17:00:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.534 17:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:55.534 ************************************ 00:07:55.534 END TEST accel 00:07:55.534 ************************************ 00:07:55.534 17:00:11 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:55.534 17:00:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:55.534 17:00:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.534 17:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:55.534 ************************************ 00:07:55.534 START TEST accel_rpc 00:07:55.534 ************************************ 00:07:55.534 17:00:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:55.534 * Looking for test storage... 00:07:55.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:55.534 17:00:11 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:55.534 17:00:11 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=434635 00:07:55.534 17:00:11 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:55.534 17:00:11 -- accel/accel_rpc.sh@15 -- # waitforlisten 434635 00:07:55.534 17:00:11 -- common/autotest_common.sh@819 -- # '[' -z 434635 ']' 00:07:55.534 17:00:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.534 17:00:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:55.534 17:00:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.534 17:00:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:55.534 17:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:55.534 [2024-07-20 17:00:11.658658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
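The accel_rpc flow traced below depends on spdk_tgt being started with --wait-for-rpc: the target pauses before subsystem initialization so that opcode-to-module assignments can still be changed over JSON-RPC, and framework_start_init then completes startup. The same sequence can be driven by hand with SPDK's stock rpc.py client (the path assumes the SPDK checkout used in this job):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o copy -m software     # issued before framework init, as in the test
  $rpc framework_start_init                     # finish bringing the target up
  $rpc accel_get_opc_assignments | jq -r .copy  # prints: software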
00:07:55.534 [2024-07-20 17:00:11.658735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434635 ] 00:07:55.534 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.792 [2024-07-20 17:00:11.719753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.792 [2024-07-20 17:00:11.808375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.792 [2024-07-20 17:00:11.808573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.792 17:00:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:55.792 17:00:11 -- common/autotest_common.sh@852 -- # return 0 00:07:55.792 17:00:11 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:55.792 17:00:11 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:55.792 17:00:11 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:55.792 17:00:11 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:55.792 17:00:11 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:55.792 17:00:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:55.792 17:00:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.792 17:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:55.792 ************************************ 00:07:55.792 START TEST accel_assign_opcode 00:07:55.792 ************************************ 00:07:55.792 17:00:11 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:55.792 17:00:11 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:55.792 17:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.792 17:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:55.793 [2024-07-20 17:00:11.857128] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:55.793 17:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.793 17:00:11 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:55.793 17:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.793 17:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:55.793 [2024-07-20 17:00:11.865136] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:55.793 17:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.793 17:00:11 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:55.793 17:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.793 17:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:56.050 17:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.050 17:00:12 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:56.050 17:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.050 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:07:56.050 17:00:12 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:56.050 17:00:12 -- accel/accel_rpc.sh@42 -- # grep software 00:07:56.050 17:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.050 software 00:07:56.050 00:07:56.050 real 0m0.296s 00:07:56.050 user 0m0.042s 00:07:56.050 sys 0m0.005s 00:07:56.050 17:00:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.050 17:00:12 -- common/autotest_common.sh@10 -- # set +x 
00:07:56.050 ************************************ 00:07:56.050 END TEST accel_assign_opcode 00:07:56.050 ************************************ 00:07:56.050 17:00:12 -- accel/accel_rpc.sh@55 -- # killprocess 434635 00:07:56.050 17:00:12 -- common/autotest_common.sh@926 -- # '[' -z 434635 ']' 00:07:56.050 17:00:12 -- common/autotest_common.sh@930 -- # kill -0 434635 00:07:56.050 17:00:12 -- common/autotest_common.sh@931 -- # uname 00:07:56.050 17:00:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:56.050 17:00:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 434635 00:07:56.050 17:00:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:56.050 17:00:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:56.050 17:00:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 434635' 00:07:56.050 killing process with pid 434635 00:07:56.050 17:00:12 -- common/autotest_common.sh@945 -- # kill 434635 00:07:56.050 17:00:12 -- common/autotest_common.sh@950 -- # wait 434635 00:07:56.614 00:07:56.614 real 0m1.042s 00:07:56.614 user 0m0.961s 00:07:56.614 sys 0m0.401s 00:07:56.614 17:00:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.614 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:07:56.614 ************************************ 00:07:56.614 END TEST accel_rpc 00:07:56.614 ************************************ 00:07:56.614 17:00:12 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:56.614 17:00:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.614 17:00:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.614 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:07:56.614 ************************************ 00:07:56.614 START TEST app_cmdline 00:07:56.614 ************************************ 00:07:56.614 17:00:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:56.614 * Looking for test storage... 00:07:56.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:56.614 17:00:12 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:56.614 17:00:12 -- app/cmdline.sh@17 -- # spdk_tgt_pid=434838 00:07:56.614 17:00:12 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:56.614 17:00:12 -- app/cmdline.sh@18 -- # waitforlisten 434838 00:07:56.614 17:00:12 -- common/autotest_common.sh@819 -- # '[' -z 434838 ']' 00:07:56.614 17:00:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.614 17:00:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.614 17:00:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.614 17:00:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.614 17:00:12 -- common/autotest_common.sh@10 -- # set +x 00:07:56.614 [2024-07-20 17:00:12.725400] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
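app_cmdline starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable and anything else must fail with JSON-RPC error -32601; that is exactly what the env_dpdk_get_mem_stats probe further down verifies. Condensed, with paths abbreviated:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version         # allowed: returns the version JSON seen below
    scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # blocked: 'Method not found' (-32601)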
00:07:56.614 [2024-07-20 17:00:12.725484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434838 ] 00:07:56.614 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.872 [2024-07-20 17:00:12.782727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.872 [2024-07-20 17:00:12.865158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.872 [2024-07-20 17:00:12.865314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.804 17:00:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:57.804 17:00:13 -- common/autotest_common.sh@852 -- # return 0 00:07:57.804 17:00:13 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:57.804 { 00:07:57.804 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:07:57.804 "fields": { 00:07:57.804 "major": 24, 00:07:57.804 "minor": 1, 00:07:57.804 "patch": 1, 00:07:57.804 "suffix": "-pre", 00:07:57.804 "commit": "4b94202c6" 00:07:57.804 } 00:07:57.804 } 00:07:57.804 17:00:13 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:57.804 17:00:13 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:57.804 17:00:13 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:57.804 17:00:13 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:57.804 17:00:13 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:57.804 17:00:13 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:57.804 17:00:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.804 17:00:13 -- common/autotest_common.sh@10 -- # set +x 00:07:57.804 17:00:13 -- app/cmdline.sh@26 -- # sort 00:07:58.064 17:00:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.064 17:00:13 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:58.064 17:00:13 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:58.064 17:00:13 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.064 17:00:13 -- common/autotest_common.sh@640 -- # local es=0 00:07:58.064 17:00:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.064 17:00:13 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.064 17:00:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.064 17:00:13 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.064 17:00:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.064 17:00:14 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.064 17:00:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.064 17:00:14 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.064 17:00:14 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:58.064 17:00:14 -- common/autotest_common.sh@643 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.064 request: 00:07:58.064 { 00:07:58.064 "method": "env_dpdk_get_mem_stats", 00:07:58.064 "req_id": 1 00:07:58.064 } 00:07:58.064 Got JSON-RPC error response 00:07:58.064 response: 00:07:58.064 { 00:07:58.064 "code": -32601, 00:07:58.064 "message": "Method not found" 00:07:58.064 } 00:07:58.322 17:00:14 -- common/autotest_common.sh@643 -- # es=1 00:07:58.322 17:00:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:58.322 17:00:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:58.322 17:00:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:58.322 17:00:14 -- app/cmdline.sh@1 -- # killprocess 434838 00:07:58.322 17:00:14 -- common/autotest_common.sh@926 -- # '[' -z 434838 ']' 00:07:58.322 17:00:14 -- common/autotest_common.sh@930 -- # kill -0 434838 00:07:58.322 17:00:14 -- common/autotest_common.sh@931 -- # uname 00:07:58.322 17:00:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:58.322 17:00:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 434838 00:07:58.322 17:00:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:58.322 17:00:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:58.322 17:00:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 434838' 00:07:58.322 killing process with pid 434838 00:07:58.322 17:00:14 -- common/autotest_common.sh@945 -- # kill 434838 00:07:58.322 17:00:14 -- common/autotest_common.sh@950 -- # wait 434838 00:07:58.581 00:07:58.581 real 0m2.058s 00:07:58.581 user 0m2.614s 00:07:58.581 sys 0m0.496s 00:07:58.581 17:00:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.581 17:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.581 ************************************ 00:07:58.581 END TEST app_cmdline 00:07:58.581 ************************************ 00:07:58.581 17:00:14 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:58.581 17:00:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:58.581 17:00:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.581 17:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.581 ************************************ 00:07:58.581 START TEST version 00:07:58.581 ************************************ 00:07:58.581 17:00:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:58.839 * Looking for test storage... 
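version.sh, which runs next, derives the version string by scraping include/spdk/version.h rather than asking a running target; every get_header_version call in the trace is the same grep | cut | tr pipeline over a different #define. A sketch of that helper, mirroring the trace (suffix/rc handling omitted):

    get_header_version() {  # e.g. get_header_version MAJOR
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version="$(get_header_version MAJOR).$(get_header_version MINOR)"
    (( $(get_header_version PATCH) != 0 )) && version+=".$(get_header_version PATCH)"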
00:07:58.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:58.839 17:00:14 -- app/version.sh@17 -- # get_header_version major 00:07:58.839 17:00:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:58.839 17:00:14 -- app/version.sh@14 -- # cut -f2 00:07:58.839 17:00:14 -- app/version.sh@14 -- # tr -d '"' 00:07:58.839 17:00:14 -- app/version.sh@17 -- # major=24 00:07:58.839 17:00:14 -- app/version.sh@18 -- # get_header_version minor 00:07:58.839 17:00:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:58.839 17:00:14 -- app/version.sh@14 -- # cut -f2 00:07:58.839 17:00:14 -- app/version.sh@14 -- # tr -d '"' 00:07:58.839 17:00:14 -- app/version.sh@18 -- # minor=1 00:07:58.839 17:00:14 -- app/version.sh@19 -- # get_header_version patch 00:07:58.839 17:00:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:58.839 17:00:14 -- app/version.sh@14 -- # cut -f2 00:07:58.839 17:00:14 -- app/version.sh@14 -- # tr -d '"' 00:07:58.839 17:00:14 -- app/version.sh@19 -- # patch=1 00:07:58.839 17:00:14 -- app/version.sh@20 -- # get_header_version suffix 00:07:58.839 17:00:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:58.839 17:00:14 -- app/version.sh@14 -- # cut -f2 00:07:58.839 17:00:14 -- app/version.sh@14 -- # tr -d '"' 00:07:58.839 17:00:14 -- app/version.sh@20 -- # suffix=-pre 00:07:58.839 17:00:14 -- app/version.sh@22 -- # version=24.1 00:07:58.839 17:00:14 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:58.839 17:00:14 -- app/version.sh@25 -- # version=24.1.1 00:07:58.839 17:00:14 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:58.839 17:00:14 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:58.839 17:00:14 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:58.839 17:00:14 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:58.839 17:00:14 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:58.839 00:07:58.839 real 0m0.104s 00:07:58.839 user 0m0.058s 00:07:58.839 sys 0m0.067s 00:07:58.839 17:00:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.839 17:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.840 ************************************ 00:07:58.840 END TEST version 00:07:58.840 ************************************ 00:07:58.840 17:00:14 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- spdk/autotest.sh@204 -- # uname -s 00:07:58.840 17:00:14 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:58.840 17:00:14 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:58.840 17:00:14 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:58.840 17:00:14 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:58.840 17:00:14 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:58.840 17:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.840 17:00:14 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:58.840 17:00:14 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:58.840 17:00:14 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:58.840 17:00:14 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:58.840 17:00:14 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:58.840 17:00:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:58.840 17:00:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.840 17:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.840 ************************************ 00:07:58.840 START TEST nvmf_tcp 00:07:58.840 ************************************ 00:07:58.840 17:00:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:58.840 * Looking for test storage... 00:07:58.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:58.840 17:00:14 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:58.840 17:00:14 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:58.840 17:00:14 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.840 17:00:14 -- nvmf/common.sh@7 -- # uname -s 00:07:58.840 17:00:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.840 17:00:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.840 17:00:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.840 17:00:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.840 17:00:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.840 17:00:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.840 17:00:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.840 17:00:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.840 17:00:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.840 17:00:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.840 17:00:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.840 17:00:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.840 17:00:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.840 17:00:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.840 17:00:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.840 17:00:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.840 17:00:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.840 17:00:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.840 17:00:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.840 17:00:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.840 17:00:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.840 17:00:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.840 17:00:14 -- paths/export.sh@5 -- # export PATH 00:07:58.840 17:00:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.840 17:00:14 -- nvmf/common.sh@46 -- # : 0 00:07:58.840 17:00:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:58.840 17:00:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:58.840 17:00:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.840 17:00:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.840 17:00:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:58.840 17:00:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:58.840 17:00:14 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:58.840 17:00:14 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:58.840 17:00:14 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:58.840 17:00:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:58.840 17:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.840 17:00:14 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:58.840 17:00:14 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:58.840 17:00:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:58.840 17:00:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.840 17:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.840 ************************************ 00:07:58.840 START TEST nvmf_example 00:07:58.840 ************************************ 00:07:58.840 17:00:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:58.840 * Looking for test storage... 
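nvmf_example.sh re-sources test/nvmf/common.sh, and nvmftestinit then scans the PCI bus for supported NICs (the e810/x722/mlx device-ID tables in the trace below) before resolving each matching PCI function to its kernel netdev through sysfs. The resolution step, reconstructed from the trace:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev(s) bound to this PCI function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done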
00:07:58.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.840 17:00:14 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.840 17:00:14 -- nvmf/common.sh@7 -- # uname -s 00:07:58.840 17:00:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.840 17:00:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.840 17:00:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.840 17:00:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.840 17:00:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.840 17:00:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.840 17:00:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.840 17:00:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.840 17:00:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.840 17:00:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.840 17:00:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.840 17:00:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.840 17:00:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.840 17:00:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.840 17:00:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.840 17:00:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.840 17:00:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.840 17:00:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.840 17:00:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.840 17:00:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.840 17:00:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.840 17:00:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.840 17:00:14 -- paths/export.sh@5 -- # export PATH 00:07:58.840 17:00:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.840 17:00:14 -- nvmf/common.sh@46 -- # : 0 00:07:58.840 17:00:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:58.840 17:00:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:58.840 17:00:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.840 17:00:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.840 17:00:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:58.840 17:00:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:58.840 17:00:14 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:58.840 17:00:14 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:58.840 17:00:14 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:58.840 17:00:14 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:58.840 17:00:14 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:58.840 17:00:14 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:58.840 17:00:14 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:58.840 17:00:14 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:58.840 17:00:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:58.840 17:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.840 17:00:14 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:58.841 17:00:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:58.841 17:00:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.841 17:00:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:58.841 17:00:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:58.841 17:00:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:58.841 17:00:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.841 17:00:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.841 17:00:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.841 17:00:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:58.841 17:00:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:58.841 17:00:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:58.841 17:00:14 -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.368 17:00:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:01.368 17:00:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:01.368 17:00:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:01.368 17:00:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:01.368 17:00:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:01.368 17:00:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:01.368 17:00:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:01.368 17:00:17 -- nvmf/common.sh@294 -- # net_devs=() 00:08:01.368 17:00:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:01.368 17:00:17 -- nvmf/common.sh@295 -- # e810=() 00:08:01.368 17:00:17 -- nvmf/common.sh@295 -- # local -ga e810 00:08:01.368 17:00:17 -- nvmf/common.sh@296 -- # x722=() 00:08:01.368 17:00:17 -- nvmf/common.sh@296 -- # local -ga x722 00:08:01.368 17:00:17 -- nvmf/common.sh@297 -- # mlx=() 00:08:01.368 17:00:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:01.368 17:00:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.368 17:00:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:01.368 17:00:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:01.368 17:00:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:01.368 17:00:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:01.368 17:00:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:01.368 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:01.368 17:00:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:01.368 17:00:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:01.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:01.368 17:00:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
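Two ports of the same E810 NIC are found below (cvl_0_0 and cvl_0_1), and nvmf_tcp_init isolates one of them in a network namespace so that target and initiator can exchange real TCP traffic on a single host. The namespace plumbing in the trace reduces to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                             # root ns -> namespace; the trace pings back as well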
00:08:01.368 17:00:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:01.368 17:00:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:01.368 17:00:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.368 17:00:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:01.368 17:00:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.368 17:00:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:01.368 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:01.368 17:00:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.368 17:00:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:01.368 17:00:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.368 17:00:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:01.368 17:00:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.368 17:00:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:01.368 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:01.368 17:00:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.368 17:00:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:01.368 17:00:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:01.368 17:00:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:01.368 17:00:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:01.368 17:00:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.368 17:00:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.368 17:00:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.368 17:00:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:01.368 17:00:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.368 17:00:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.368 17:00:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:01.368 17:00:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.368 17:00:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.368 17:00:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:01.368 17:00:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:01.368 17:00:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.368 17:00:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.368 17:00:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.368 17:00:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.369 17:00:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:01.369 17:00:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.369 17:00:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.369 17:00:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.369 17:00:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:01.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:01.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:08:01.369 00:08:01.369 --- 10.0.0.2 ping statistics --- 00:08:01.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.369 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:08:01.369 17:00:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:01.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:08:01.369 00:08:01.369 --- 10.0.0.1 ping statistics --- 00:08:01.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.369 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:08:01.369 17:00:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.369 17:00:17 -- nvmf/common.sh@410 -- # return 0 00:08:01.369 17:00:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:01.369 17:00:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.369 17:00:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:01.369 17:00:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:01.369 17:00:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.369 17:00:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:01.369 17:00:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:01.369 17:00:17 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:01.369 17:00:17 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:01.369 17:00:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:01.369 17:00:17 -- common/autotest_common.sh@10 -- # set +x 00:08:01.369 17:00:17 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:01.369 17:00:17 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:01.369 17:00:17 -- target/nvmf_example.sh@34 -- # nvmfpid=436870 00:08:01.369 17:00:17 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:01.369 17:00:17 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:01.369 17:00:17 -- target/nvmf_example.sh@36 -- # waitforlisten 436870 00:08:01.369 17:00:17 -- common/autotest_common.sh@819 -- # '[' -z 436870 ']' 00:08:01.369 17:00:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.369 17:00:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:01.369 17:00:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
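Once the example target is listening inside the namespace, the test provisions it entirely over RPC and then drives it from the root namespace with spdk_nvme_perf. Minus the rpc_cmd harness wrappers and with paths abbreviated, the provisioning path in the trace below is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512        # 64 MB ramdisk with 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'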
00:08:01.369 17:00:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:01.369 17:00:17 -- common/autotest_common.sh@10 -- # set +x 00:08:01.369 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.299 17:00:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:02.299 17:00:18 -- common/autotest_common.sh@852 -- # return 0 00:08:02.299 17:00:18 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:02.299 17:00:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:02.299 17:00:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.299 17:00:18 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:02.299 17:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.299 17:00:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.299 17:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.299 17:00:18 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:02.299 17:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.299 17:00:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.299 17:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.299 17:00:18 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:02.299 17:00:18 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:02.299 17:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.299 17:00:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.299 17:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.299 17:00:18 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:02.299 17:00:18 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:02.299 17:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.299 17:00:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.299 17:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.299 17:00:18 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:02.299 17:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.299 17:00:18 -- common/autotest_common.sh@10 -- # set +x 00:08:02.299 17:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:02.299 17:00:18 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:02.299 17:00:18 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:02.299 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.482 Initializing NVMe Controllers 00:08:14.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:14.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:14.482 Initialization complete. Launching workers. 
00:08:14.482 ======================================================== 00:08:14.482 Latency(us) 00:08:14.482 Device Information : IOPS MiB/s Average min max 00:08:14.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13570.80 53.01 4715.98 878.78 15286.55 00:08:14.482 ======================================================== 00:08:14.482 Total : 13570.80 53.01 4715.98 878.78 15286.55 00:08:14.482 00:08:14.482 17:00:28 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:14.482 17:00:28 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:14.482 17:00:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:14.482 17:00:28 -- nvmf/common.sh@116 -- # sync 00:08:14.482 17:00:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:14.482 17:00:28 -- nvmf/common.sh@119 -- # set +e 00:08:14.482 17:00:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:14.482 17:00:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:14.482 rmmod nvme_tcp 00:08:14.482 rmmod nvme_fabrics 00:08:14.482 rmmod nvme_keyring 00:08:14.482 17:00:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:14.482 17:00:28 -- nvmf/common.sh@123 -- # set -e 00:08:14.482 17:00:28 -- nvmf/common.sh@124 -- # return 0 00:08:14.482 17:00:28 -- nvmf/common.sh@477 -- # '[' -n 436870 ']' 00:08:14.482 17:00:28 -- nvmf/common.sh@478 -- # killprocess 436870 00:08:14.482 17:00:28 -- common/autotest_common.sh@926 -- # '[' -z 436870 ']' 00:08:14.482 17:00:28 -- common/autotest_common.sh@930 -- # kill -0 436870 00:08:14.482 17:00:28 -- common/autotest_common.sh@931 -- # uname 00:08:14.482 17:00:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:14.482 17:00:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 436870 00:08:14.482 17:00:28 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:14.482 17:00:28 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:14.482 17:00:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 436870' 00:08:14.482 killing process with pid 436870 00:08:14.482 17:00:28 -- common/autotest_common.sh@945 -- # kill 436870 00:08:14.482 17:00:28 -- common/autotest_common.sh@950 -- # wait 436870 00:08:14.482 nvmf threads initialize successfully 00:08:14.482 bdev subsystem init successfully 00:08:14.482 created a nvmf target service 00:08:14.482 create targets's poll groups done 00:08:14.482 all subsystems of target started 00:08:14.482 nvmf target is running 00:08:14.482 all subsystems of target stopped 00:08:14.482 destroy targets's poll groups done 00:08:14.482 destroyed the nvmf target service 00:08:14.482 bdev subsystem finish successfully 00:08:14.482 nvmf threads destroy successfully 00:08:14.482 17:00:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:14.482 17:00:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:14.482 17:00:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:14.482 17:00:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.482 17:00:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:14.482 17:00:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.482 17:00:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.482 17:00:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.057 17:00:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:15.057 17:00:30 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:15.057 17:00:30 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:08:15.057 17:00:30 -- common/autotest_common.sh@10 -- # set +x 00:08:15.057 00:08:15.057 real 0m16.042s 00:08:15.057 user 0m45.601s 00:08:15.057 sys 0m3.218s 00:08:15.057 17:00:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.057 17:00:30 -- common/autotest_common.sh@10 -- # set +x 00:08:15.057 ************************************ 00:08:15.057 END TEST nvmf_example 00:08:15.057 ************************************ 00:08:15.057 17:00:30 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:15.057 17:00:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:15.057 17:00:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.057 17:00:30 -- common/autotest_common.sh@10 -- # set +x 00:08:15.057 ************************************ 00:08:15.057 START TEST nvmf_filesystem 00:08:15.057 ************************************ 00:08:15.057 17:00:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:15.057 * Looking for test storage... 00:08:15.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.057 17:00:31 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:15.057 17:00:31 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:15.057 17:00:31 -- common/autotest_common.sh@34 -- # set -e 00:08:15.057 17:00:31 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:15.057 17:00:31 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:15.057 17:00:31 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:15.057 17:00:31 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:15.057 17:00:31 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:15.057 17:00:31 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:15.057 17:00:31 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:15.057 17:00:31 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:15.057 17:00:31 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:15.057 17:00:31 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:15.057 17:00:31 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:15.057 17:00:31 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:15.057 17:00:31 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:15.057 17:00:31 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:15.057 17:00:31 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:15.057 17:00:31 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:15.057 17:00:31 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:15.057 17:00:31 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:15.057 17:00:31 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:15.057 17:00:31 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:15.057 17:00:31 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:15.057 17:00:31 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:15.057 17:00:31 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:15.057 17:00:31 -- common/build_config.sh@20 -- # CONFIG_LTO=n 
00:08:15.057 17:00:31 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:15.057 17:00:31 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:15.057 17:00:31 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:15.057 17:00:31 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:15.057 17:00:31 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:15.057 17:00:31 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:15.057 17:00:31 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:15.057 17:00:31 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:15.057 17:00:31 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:15.057 17:00:31 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:15.057 17:00:31 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:15.057 17:00:31 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:15.057 17:00:31 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:15.057 17:00:31 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:15.057 17:00:31 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:15.057 17:00:31 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.057 17:00:31 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:15.057 17:00:31 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:15.057 17:00:31 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:15.057 17:00:31 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:15.057 17:00:31 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:15.057 17:00:31 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:15.057 17:00:31 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:15.057 17:00:31 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:15.057 17:00:31 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:15.057 17:00:31 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:15.057 17:00:31 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:15.057 17:00:31 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:15.057 17:00:31 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:15.057 17:00:31 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:15.057 17:00:31 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:15.057 17:00:31 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:15.057 17:00:31 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:15.057 17:00:31 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:15.057 17:00:31 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:15.057 17:00:31 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:15.057 17:00:31 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:15.057 17:00:31 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:15.057 17:00:31 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:15.057 17:00:31 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:15.057 17:00:31 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.057 17:00:31 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:15.057 17:00:31 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:15.057 17:00:31 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:15.057 17:00:31 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 
00:08:15.057 17:00:31 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:15.057 17:00:31 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:15.057 17:00:31 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:15.057 17:00:31 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:15.057 17:00:31 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:15.057 17:00:31 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:15.057 17:00:31 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:15.057 17:00:31 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:15.057 17:00:31 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:15.057 17:00:31 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:15.057 17:00:31 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:15.057 17:00:31 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:15.057 17:00:31 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:15.057 17:00:31 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:15.057 17:00:31 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:15.057 17:00:31 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:15.057 17:00:31 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:15.058 17:00:31 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:15.058 17:00:31 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:15.058 17:00:31 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.058 17:00:31 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:15.058 17:00:31 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.058 17:00:31 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:15.058 17:00:31 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:15.058 17:00:31 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:15.058 17:00:31 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:15.058 17:00:31 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:15.058 17:00:31 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:15.058 17:00:31 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:15.058 17:00:31 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:15.058 #define SPDK_CONFIG_H 00:08:15.058 #define SPDK_CONFIG_APPS 1 00:08:15.058 #define SPDK_CONFIG_ARCH native 00:08:15.058 #undef SPDK_CONFIG_ASAN 00:08:15.058 #undef SPDK_CONFIG_AVAHI 00:08:15.058 #undef SPDK_CONFIG_CET 00:08:15.058 #define SPDK_CONFIG_COVERAGE 1 00:08:15.058 #define SPDK_CONFIG_CROSS_PREFIX 00:08:15.058 #undef SPDK_CONFIG_CRYPTO 00:08:15.058 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:15.058 #undef SPDK_CONFIG_CUSTOMOCF 00:08:15.058 #undef SPDK_CONFIG_DAOS 00:08:15.058 #define SPDK_CONFIG_DAOS_DIR 00:08:15.058 #define SPDK_CONFIG_DEBUG 1 00:08:15.058 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:15.058 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.058 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:15.058 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.058 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:15.058 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:15.058 #define SPDK_CONFIG_EXAMPLES 1 00:08:15.058 #undef SPDK_CONFIG_FC 00:08:15.058 #define SPDK_CONFIG_FC_PATH 00:08:15.058 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:15.058 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:15.058 #undef SPDK_CONFIG_FUSE 00:08:15.058 #undef SPDK_CONFIG_FUZZER 00:08:15.058 #define SPDK_CONFIG_FUZZER_LIB 00:08:15.058 #undef SPDK_CONFIG_GOLANG 00:08:15.058 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:15.058 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:15.058 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:15.058 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:15.058 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:15.058 #define SPDK_CONFIG_IDXD 1 00:08:15.058 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:15.058 #undef SPDK_CONFIG_IPSEC_MB 00:08:15.058 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:15.058 #define SPDK_CONFIG_ISAL 1 00:08:15.058 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:15.058 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:15.058 #define SPDK_CONFIG_LIBDIR 00:08:15.058 #undef SPDK_CONFIG_LTO 00:08:15.058 #define SPDK_CONFIG_MAX_LCORES 00:08:15.058 #define SPDK_CONFIG_NVME_CUSE 1 00:08:15.058 #undef SPDK_CONFIG_OCF 00:08:15.058 #define SPDK_CONFIG_OCF_PATH 00:08:15.058 #define SPDK_CONFIG_OPENSSL_PATH 00:08:15.058 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:15.058 #undef SPDK_CONFIG_PGO_USE 00:08:15.058 #define SPDK_CONFIG_PREFIX /usr/local 00:08:15.058 #undef SPDK_CONFIG_RAID5F 00:08:15.058 #undef SPDK_CONFIG_RBD 00:08:15.058 #define SPDK_CONFIG_RDMA 1 00:08:15.058 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:15.058 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:15.058 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:15.058 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:15.058 #define SPDK_CONFIG_SHARED 1 00:08:15.058 #undef SPDK_CONFIG_SMA 00:08:15.058 #define SPDK_CONFIG_TESTS 1 00:08:15.058 #undef SPDK_CONFIG_TSAN 00:08:15.058 #define SPDK_CONFIG_UBLK 1 00:08:15.058 #define SPDK_CONFIG_UBSAN 1 00:08:15.058 #undef SPDK_CONFIG_UNIT_TESTS 00:08:15.058 #undef SPDK_CONFIG_URING 00:08:15.058 #define SPDK_CONFIG_URING_PATH 00:08:15.058 #undef SPDK_CONFIG_URING_ZNS 00:08:15.058 #undef SPDK_CONFIG_USDT 00:08:15.058 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:15.058 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:15.058 #define SPDK_CONFIG_VFIO_USER 1 00:08:15.058 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:15.058 #define SPDK_CONFIG_VHOST 1 00:08:15.058 #define SPDK_CONFIG_VIRTIO 1 00:08:15.058 #undef SPDK_CONFIG_VTUNE 00:08:15.058 #define SPDK_CONFIG_VTUNE_DIR 00:08:15.058 #define SPDK_CONFIG_WERROR 1 00:08:15.058 #define SPDK_CONFIG_WPDK_DIR 00:08:15.058 #undef SPDK_CONFIG_XNVME 00:08:15.058 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:15.058 17:00:31 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:15.058 17:00:31 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.058 17:00:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.058 17:00:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.058 
17:00:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.058 17:00:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.058 17:00:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.058 17:00:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.058 17:00:31 -- paths/export.sh@5 -- # export PATH 00:08:15.058 17:00:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.058 17:00:31 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:15.058 17:00:31 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:15.058 17:00:31 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:15.058 17:00:31 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:15.058 17:00:31 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:15.058 17:00:31 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:15.058 17:00:31 -- pm/common@16 -- # TEST_TAG=N/A 00:08:15.058 17:00:31 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:15.058 17:00:31 -- common/autotest_common.sh@52 -- # : 1 00:08:15.058 17:00:31 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:15.058 17:00:31 -- common/autotest_common.sh@56 -- # : 0 
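The run of ': N' / 'export NAME' pairs around this point is autotest_common.sh defaulting and exporting the test knobs; xtrace prints the already-expanded form, so the underlying source presumably looks like this sketch (variable names taken from the trace, exact lines assumed):

  # ':' is a no-op that still performs its expansions, so ${VAR:=default}
  # assigns the default only when VAR is unset or empty; xtrace then
  # shows the bare ': 1' / ': 0' seen throughout this section.
  : "${RUN_NIGHTLY:=1}"
  export RUN_NIGHTLY
  : "${SPDK_TEST_NVMF:=1}"
  export SPDK_TEST_NVMF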
00:08:15.058 17:00:31 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:15.058 17:00:31 -- common/autotest_common.sh@58 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:15.058 17:00:31 -- common/autotest_common.sh@60 -- # : 1 00:08:15.058 17:00:31 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:15.058 17:00:31 -- common/autotest_common.sh@62 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:15.058 17:00:31 -- common/autotest_common.sh@64 -- # : 00:08:15.058 17:00:31 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:15.058 17:00:31 -- common/autotest_common.sh@66 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:15.058 17:00:31 -- common/autotest_common.sh@68 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:15.058 17:00:31 -- common/autotest_common.sh@70 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:15.058 17:00:31 -- common/autotest_common.sh@72 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:15.058 17:00:31 -- common/autotest_common.sh@74 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:15.058 17:00:31 -- common/autotest_common.sh@76 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:15.058 17:00:31 -- common/autotest_common.sh@78 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:15.058 17:00:31 -- common/autotest_common.sh@80 -- # : 1 00:08:15.058 17:00:31 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:15.058 17:00:31 -- common/autotest_common.sh@82 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:15.058 17:00:31 -- common/autotest_common.sh@84 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:15.058 17:00:31 -- common/autotest_common.sh@86 -- # : 1 00:08:15.058 17:00:31 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:15.058 17:00:31 -- common/autotest_common.sh@88 -- # : 1 00:08:15.058 17:00:31 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:15.058 17:00:31 -- common/autotest_common.sh@90 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:15.058 17:00:31 -- common/autotest_common.sh@92 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:15.058 17:00:31 -- common/autotest_common.sh@94 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:15.058 17:00:31 -- common/autotest_common.sh@96 -- # : tcp 00:08:15.058 17:00:31 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:15.058 17:00:31 -- common/autotest_common.sh@98 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:15.058 17:00:31 -- common/autotest_common.sh@100 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:15.058 17:00:31 -- common/autotest_common.sh@102 -- # : 0 00:08:15.058 17:00:31 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:15.059 17:00:31 -- 
common/autotest_common.sh@104 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:15.059 17:00:31 -- common/autotest_common.sh@106 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:15.059 17:00:31 -- common/autotest_common.sh@108 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:15.059 17:00:31 -- common/autotest_common.sh@110 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:15.059 17:00:31 -- common/autotest_common.sh@112 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:15.059 17:00:31 -- common/autotest_common.sh@114 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:15.059 17:00:31 -- common/autotest_common.sh@116 -- # : 1 00:08:15.059 17:00:31 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:15.059 17:00:31 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.059 17:00:31 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:15.059 17:00:31 -- common/autotest_common.sh@120 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:15.059 17:00:31 -- common/autotest_common.sh@122 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:15.059 17:00:31 -- common/autotest_common.sh@124 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:15.059 17:00:31 -- common/autotest_common.sh@126 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:15.059 17:00:31 -- common/autotest_common.sh@128 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:15.059 17:00:31 -- common/autotest_common.sh@130 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:15.059 17:00:31 -- common/autotest_common.sh@132 -- # : v23.11 00:08:15.059 17:00:31 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:15.059 17:00:31 -- common/autotest_common.sh@134 -- # : true 00:08:15.059 17:00:31 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:15.059 17:00:31 -- common/autotest_common.sh@136 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:15.059 17:00:31 -- common/autotest_common.sh@138 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:15.059 17:00:31 -- common/autotest_common.sh@140 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:15.059 17:00:31 -- common/autotest_common.sh@142 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:15.059 17:00:31 -- common/autotest_common.sh@144 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:15.059 17:00:31 -- common/autotest_common.sh@146 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:15.059 17:00:31 -- common/autotest_common.sh@148 -- # : e810 00:08:15.059 17:00:31 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:15.059 17:00:31 -- common/autotest_common.sh@150 -- # : 0 00:08:15.059 17:00:31 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:15.059 17:00:31 -- common/autotest_common.sh@152 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:15.059 17:00:31 -- common/autotest_common.sh@154 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:15.059 17:00:31 -- common/autotest_common.sh@156 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:15.059 17:00:31 -- common/autotest_common.sh@158 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:15.059 17:00:31 -- common/autotest_common.sh@160 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:15.059 17:00:31 -- common/autotest_common.sh@163 -- # : 00:08:15.059 17:00:31 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:15.059 17:00:31 -- common/autotest_common.sh@165 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:15.059 17:00:31 -- common/autotest_common.sh@167 -- # : 0 00:08:15.059 17:00:31 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:15.059 17:00:31 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:15.059 17:00:31 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:15.059 17:00:31 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.059 17:00:31 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.059 17:00:31 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.059 17:00:31 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.059 17:00:31 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.059 17:00:31 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.059 17:00:31 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:15.059 17:00:31 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:15.059 17:00:31 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:15.059 17:00:31 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:15.059 17:00:31 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:15.059 17:00:31 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:15.059 17:00:31 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:15.059 17:00:31 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:15.059 17:00:31 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:15.059 17:00:31 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:15.059 17:00:31 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:15.059 17:00:31 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:15.059 17:00:31 -- common/autotest_common.sh@196 -- # cat 00:08:15.059 17:00:31 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:15.059 17:00:31 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:15.059 17:00:31 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:15.059 17:00:31 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:15.059 17:00:31 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:15.059 17:00:31 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:15.059 17:00:31 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:15.059 17:00:31 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.059 17:00:31 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.059 17:00:31 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.059 17:00:31 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.059 17:00:31 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:15.059 17:00:31 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:15.059 17:00:31 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:15.059 17:00:31 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:15.059 17:00:31 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:15.059 17:00:31 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:15.059 17:00:31 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:15.059 17:00:31 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:15.059 17:00:31 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:15.059 17:00:31 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:15.059 17:00:31 -- common/autotest_common.sh@249 -- # valgrind= 00:08:15.059 17:00:31 -- common/autotest_common.sh@255 -- # uname -s 00:08:15.059 17:00:31 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:15.059 17:00:31 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:15.059 17:00:31 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:15.059 17:00:31 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:15.059 17:00:31 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:15.059 17:00:31 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:15.059 17:00:31 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:15.059 17:00:31 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:08:15.059 17:00:31 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:15.059 17:00:31 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:15.059 17:00:31 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:15.059 17:00:31 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:15.059 17:00:31 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:15.060 17:00:31 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:15.060 17:00:31 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:15.060 17:00:31 -- common/autotest_common.sh@297 -- 
# TEST_TRANSPORT=tcp 00:08:15.060 17:00:31 -- common/autotest_common.sh@309 -- # [[ -z 438622 ]] 00:08:15.060 17:00:31 -- common/autotest_common.sh@309 -- # kill -0 438622 00:08:15.060 17:00:31 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:15.060 17:00:31 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:15.060 17:00:31 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:15.060 17:00:31 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:15.060 17:00:31 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:15.060 17:00:31 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:15.060 17:00:31 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:15.060 17:00:31 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:15.060 17:00:31 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.aIf1BK 00:08:15.060 17:00:31 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:15.060 17:00:31 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:15.060 17:00:31 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:15.060 17:00:31 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.aIf1BK/tests/target /tmp/spdk.aIf1BK 00:08:15.060 17:00:31 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:15.060 17:00:31 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.060 17:00:31 -- common/autotest_common.sh@318 -- # df -T 00:08:15.060 17:00:31 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:15.060 17:00:31 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:15.060 17:00:31 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # avails["$mount"]=953643008 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:15.060 17:00:31 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330786816 00:08:15.060 17:00:31 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # avails["$mount"]=52966887424 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994721280 00:08:15.060 17:00:31 -- common/autotest_common.sh@354 -- # uses["$mount"]=9027833856 00:08:15.060 17:00:31 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943842304 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997360640 00:08:15.060 17:00:31 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:15.060 17:00:31 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390182912 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398944256 00:08:15.060 17:00:31 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:08:15.060 17:00:31 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # avails["$mount"]=30995255296 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997360640 00:08:15.060 17:00:31 -- common/autotest_common.sh@354 -- # uses["$mount"]=2105344 00:08:15.060 17:00:31 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.060 17:00:31 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199468032 00:08:15.060 17:00:31 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199472128 00:08:15.060 17:00:31 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:15.060 17:00:31 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.060 17:00:31 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:15.060 * Looking for test storage... 
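set_test_storage is reading the df -T table above into parallel associative arrays before picking a mount with enough free space for the requested ~2 GiB of test storage; condensed from the trace, the loop looks roughly like this (a sketch, not the verbatim helper):

  declare -A mounts fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      mounts["$mount"]=$source
      fss["$mount"]=$fs
      avails["$mount"]=$avail    # compared against requested_size just below
      sizes["$mount"]=$size
      uses["$mount"]=$use
  done < <(df -T | grep -v Filesystem)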
00:08:15.060 17:00:31 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:15.060 17:00:31 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:15.060 17:00:31 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.060 17:00:31 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:15.060 17:00:31 -- common/autotest_common.sh@363 -- # mount=/ 00:08:15.060 17:00:31 -- common/autotest_common.sh@365 -- # target_space=52966887424 00:08:15.060 17:00:31 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:15.060 17:00:31 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:15.060 17:00:31 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:15.060 17:00:31 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:15.060 17:00:31 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:15.060 17:00:31 -- common/autotest_common.sh@372 -- # new_size=11242426368 00:08:15.060 17:00:31 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:15.060 17:00:31 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.060 17:00:31 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.060 17:00:31 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.060 17:00:31 -- common/autotest_common.sh@380 -- # return 0 00:08:15.060 17:00:31 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:15.060 17:00:31 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:15.060 17:00:31 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:15.060 17:00:31 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:15.060 17:00:31 -- common/autotest_common.sh@1672 -- # true 00:08:15.060 17:00:31 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:15.060 17:00:31 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:15.060 17:00:31 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:15.060 17:00:31 -- common/autotest_common.sh@27 -- # exec 00:08:15.060 17:00:31 -- common/autotest_common.sh@29 -- # exec 00:08:15.060 17:00:31 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:15.060 17:00:31 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:15.060 17:00:31 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:15.060 17:00:31 -- common/autotest_common.sh@18 -- # set -x 00:08:15.060 17:00:31 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.060 17:00:31 -- nvmf/common.sh@7 -- # uname -s 00:08:15.060 17:00:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.060 17:00:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.060 17:00:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.060 17:00:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.060 17:00:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.060 17:00:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.060 17:00:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.060 17:00:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.060 17:00:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.060 17:00:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.060 17:00:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.060 17:00:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.060 17:00:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.060 17:00:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.060 17:00:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.060 17:00:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.060 17:00:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.060 17:00:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.060 17:00:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.060 17:00:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.060 17:00:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.061 17:00:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.061 17:00:31 -- paths/export.sh@5 -- # export PATH 00:08:15.061 17:00:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.061 17:00:31 -- nvmf/common.sh@46 -- # : 0 00:08:15.061 17:00:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:15.061 17:00:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:15.061 17:00:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:15.061 17:00:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.061 17:00:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.061 17:00:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:15.061 17:00:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:15.061 17:00:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:15.061 17:00:31 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:15.061 17:00:31 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:15.061 17:00:31 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:15.061 17:00:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:15.061 17:00:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.061 17:00:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:15.061 17:00:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:15.061 17:00:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:15.061 17:00:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.061 17:00:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.061 17:00:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.061 17:00:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:15.061 17:00:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:15.061 17:00:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:15.061 17:00:31 -- common/autotest_common.sh@10 -- # set +x 00:08:17.584 17:00:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:17.584 17:00:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:17.584 17:00:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:17.584 17:00:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:17.584 17:00:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:17.584 17:00:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:17.584 17:00:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:17.584 17:00:33 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:17.584 17:00:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:17.584 17:00:33 -- nvmf/common.sh@295 -- # e810=() 00:08:17.584 17:00:33 -- nvmf/common.sh@295 -- # local -ga e810 00:08:17.584 17:00:33 -- nvmf/common.sh@296 -- # x722=() 00:08:17.584 17:00:33 -- nvmf/common.sh@296 -- # local -ga x722 00:08:17.584 17:00:33 -- nvmf/common.sh@297 -- # mlx=() 00:08:17.584 17:00:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:17.584 17:00:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.584 17:00:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:17.584 17:00:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:17.584 17:00:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:17.584 17:00:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:17.584 17:00:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:17.584 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:17.584 17:00:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:17.584 17:00:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:17.584 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:17.584 17:00:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:17.584 17:00:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:17.584 17:00:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.584 17:00:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:17.584 17:00:33 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.584 17:00:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:17.584 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:17.584 17:00:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.584 17:00:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:17.584 17:00:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.584 17:00:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:17.584 17:00:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.584 17:00:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:17.584 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:17.584 17:00:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.584 17:00:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:17.584 17:00:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:17.584 17:00:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:17.584 17:00:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.584 17:00:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.584 17:00:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.584 17:00:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:17.584 17:00:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.584 17:00:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.584 17:00:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:17.584 17:00:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.584 17:00:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.584 17:00:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:17.584 17:00:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:17.584 17:00:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.584 17:00:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.584 17:00:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.584 17:00:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.584 17:00:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:17.584 17:00:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.584 17:00:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.584 17:00:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.584 17:00:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:17.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:08:17.584 00:08:17.584 --- 10.0.0.2 ping statistics --- 00:08:17.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.584 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:08:17.584 17:00:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:08:17.584 00:08:17.584 --- 10.0.0.1 ping statistics --- 00:08:17.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.584 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:17.584 17:00:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.584 17:00:33 -- nvmf/common.sh@410 -- # return 0 00:08:17.584 17:00:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:17.584 17:00:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.584 17:00:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:17.584 17:00:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.584 17:00:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:17.584 17:00:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:17.584 17:00:33 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:17.584 17:00:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:17.584 17:00:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.584 17:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:17.584 ************************************ 00:08:17.584 START TEST nvmf_filesystem_no_in_capsule 00:08:17.584 ************************************ 00:08:17.584 17:00:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:17.584 17:00:33 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:17.584 17:00:33 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:17.584 17:00:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:17.584 17:00:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:17.584 17:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:17.584 17:00:33 -- nvmf/common.sh@469 -- # nvmfpid=440271 00:08:17.584 17:00:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.584 17:00:33 -- nvmf/common.sh@470 -- # waitforlisten 440271 00:08:17.584 17:00:33 -- common/autotest_common.sh@819 -- # '[' -z 440271 ']' 00:08:17.584 17:00:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.584 17:00:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:17.584 17:00:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.584 17:00:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:17.584 17:00:33 -- common/autotest_common.sh@10 -- # set +x 00:08:17.584 [2024-07-20 17:00:33.405909] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
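nvmfappstart has just launched the target inside the network namespace configured above and now blocks in waitforlisten until the application's RPC socket answers; the moving parts, condensed from the trace (a sketch of the helpers, not their verbatim source — the EAL startup banner the target prints continues below):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock, the DEFAULT_RPC_ADDR exported earlier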
00:08:17.584 [2024-07-20 17:00:33.405999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.584 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.584 [2024-07-20 17:00:33.476914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.584 [2024-07-20 17:00:33.570271] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.584 [2024-07-20 17:00:33.570444] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.585 [2024-07-20 17:00:33.570465] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.585 [2024-07-20 17:00:33.570481] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.585 [2024-07-20 17:00:33.570573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.585 [2024-07-20 17:00:33.570630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.585 [2024-07-20 17:00:33.570685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.585 [2024-07-20 17:00:33.570687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.516 17:00:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:18.516 17:00:34 -- common/autotest_common.sh@852 -- # return 0 00:08:18.516 17:00:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:18.516 17:00:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:18.516 17:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.516 17:00:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.516 17:00:34 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:18.516 17:00:34 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:18.516 17:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.516 17:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.516 [2024-07-20 17:00:34.399426] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.516 17:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.516 17:00:34 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:18.516 17:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.516 17:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.516 Malloc1 00:08:18.516 17:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.516 17:00:34 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:18.516 17:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.516 17:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.516 17:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.516 17:00:34 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:18.516 17:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.516 17:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.516 17:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.516 17:00:34 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
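The rpc_cmd calls above assemble the whole target: a TCP transport with in-capsule data disabled (-c 0), a 512 MB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a TCP listener on 10.0.0.2:4420. rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client, so outside the harness the same sequence would look roughly like this (arguments copied from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420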
00:08:18.516 17:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.516 17:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.516 [2024-07-20 17:00:34.588242] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.516 17:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.516 17:00:34 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:18.516 17:00:34 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:18.516 17:00:34 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:18.516 17:00:34 -- common/autotest_common.sh@1359 -- # local bs 00:08:18.516 17:00:34 -- common/autotest_common.sh@1360 -- # local nb 00:08:18.516 17:00:34 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:18.516 17:00:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.516 17:00:34 -- common/autotest_common.sh@10 -- # set +x 00:08:18.516 17:00:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.516 17:00:34 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:18.516 { 00:08:18.516 "name": "Malloc1", 00:08:18.516 "aliases": [ 00:08:18.516 "c929f024-f6da-4be6-9c12-4af19aa43bbe" 00:08:18.516 ], 00:08:18.516 "product_name": "Malloc disk", 00:08:18.516 "block_size": 512, 00:08:18.516 "num_blocks": 1048576, 00:08:18.516 "uuid": "c929f024-f6da-4be6-9c12-4af19aa43bbe", 00:08:18.516 "assigned_rate_limits": { 00:08:18.516 "rw_ios_per_sec": 0, 00:08:18.516 "rw_mbytes_per_sec": 0, 00:08:18.516 "r_mbytes_per_sec": 0, 00:08:18.516 "w_mbytes_per_sec": 0 00:08:18.516 }, 00:08:18.516 "claimed": true, 00:08:18.516 "claim_type": "exclusive_write", 00:08:18.516 "zoned": false, 00:08:18.516 "supported_io_types": { 00:08:18.516 "read": true, 00:08:18.516 "write": true, 00:08:18.516 "unmap": true, 00:08:18.516 "write_zeroes": true, 00:08:18.516 "flush": true, 00:08:18.516 "reset": true, 00:08:18.516 "compare": false, 00:08:18.516 "compare_and_write": false, 00:08:18.516 "abort": true, 00:08:18.516 "nvme_admin": false, 00:08:18.516 "nvme_io": false 00:08:18.516 }, 00:08:18.516 "memory_domains": [ 00:08:18.516 { 00:08:18.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.516 "dma_device_type": 2 00:08:18.516 } 00:08:18.516 ], 00:08:18.516 "driver_specific": {} 00:08:18.516 } 00:08:18.516 ]' 00:08:18.516 17:00:34 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:18.516 17:00:34 -- common/autotest_common.sh@1362 -- # bs=512 00:08:18.516 17:00:34 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:18.774 17:00:34 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:18.774 17:00:34 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:18.774 17:00:34 -- common/autotest_common.sh@1367 -- # echo 512 00:08:18.774 17:00:34 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:18.774 17:00:34 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.337 17:00:35 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.337 17:00:35 -- common/autotest_common.sh@1177 -- # local i=0 00:08:19.337 17:00:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:19.337 17:00:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:19.337 17:00:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:21.231 17:00:37 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:21.231 17:00:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:21.231 17:00:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:21.231 17:00:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:21.231 17:00:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:21.231 17:00:37 -- common/autotest_common.sh@1187 -- # return 0 00:08:21.231 17:00:37 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:21.231 17:00:37 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:21.231 17:00:37 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:21.231 17:00:37 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:21.231 17:00:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:21.231 17:00:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:21.231 17:00:37 -- setup/common.sh@80 -- # echo 536870912 00:08:21.231 17:00:37 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:21.231 17:00:37 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:21.231 17:00:37 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:21.231 17:00:37 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:21.795 17:00:37 -- target/filesystem.sh@69 -- # partprobe 00:08:22.357 17:00:38 -- target/filesystem.sh@70 -- # sleep 1 00:08:23.286 17:00:39 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:23.286 17:00:39 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:23.286 17:00:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:23.286 17:00:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:23.286 17:00:39 -- common/autotest_common.sh@10 -- # set +x 00:08:23.286 ************************************ 00:08:23.286 START TEST filesystem_ext4 00:08:23.286 ************************************ 00:08:23.286 17:00:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:23.286 17:00:39 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:23.286 17:00:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:23.286 17:00:39 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:23.286 17:00:39 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:23.286 17:00:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:23.286 17:00:39 -- common/autotest_common.sh@904 -- # local i=0 00:08:23.286 17:00:39 -- common/autotest_common.sh@905 -- # local force 00:08:23.286 17:00:39 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:23.286 17:00:39 -- common/autotest_common.sh@908 -- # force=-F 00:08:23.286 17:00:39 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:23.286 mke2fs 1.46.5 (30-Dec-2021) 00:08:23.286 Discarding device blocks: 0/522240 done 00:08:23.286 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:23.286 Filesystem UUID: ec63a192-10e5-4a71-868d-f58805c57513 00:08:23.286 Superblock backups stored on blocks: 00:08:23.286 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:23.286 00:08:23.286 Allocating group tables: 0/64 done 00:08:23.286 Writing inode tables: 0/64 done 00:08:24.215 Creating journal (8192 blocks): done 00:08:25.173 Writing superblocks and filesystem accounting information: 0/64 done 00:08:25.173 00:08:25.173 17:00:41 -- 
common/autotest_common.sh@921 -- # return 0 00:08:25.173 17:00:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.430 17:00:41 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.430 17:00:41 -- target/filesystem.sh@25 -- # sync 00:08:25.430 17:00:41 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.430 17:00:41 -- target/filesystem.sh@27 -- # sync 00:08:25.430 17:00:41 -- target/filesystem.sh@29 -- # i=0 00:08:25.430 17:00:41 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.430 17:00:41 -- target/filesystem.sh@37 -- # kill -0 440271 00:08:25.430 17:00:41 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.430 17:00:41 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.430 17:00:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.430 17:00:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.430 00:08:25.430 real 0m2.195s 00:08:25.430 user 0m0.010s 00:08:25.430 sys 0m0.041s 00:08:25.430 17:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.430 17:00:41 -- common/autotest_common.sh@10 -- # set +x 00:08:25.430 ************************************ 00:08:25.430 END TEST filesystem_ext4 00:08:25.430 ************************************ 00:08:25.430 17:00:41 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:25.430 17:00:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.430 17:00:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.430 17:00:41 -- common/autotest_common.sh@10 -- # set +x 00:08:25.430 ************************************ 00:08:25.430 START TEST filesystem_btrfs 00:08:25.430 ************************************ 00:08:25.430 17:00:41 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:25.430 17:00:41 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:25.430 17:00:41 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.430 17:00:41 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:25.430 17:00:41 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:25.430 17:00:41 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:25.430 17:00:41 -- common/autotest_common.sh@904 -- # local i=0 00:08:25.430 17:00:41 -- common/autotest_common.sh@905 -- # local force 00:08:25.430 17:00:41 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:25.430 17:00:41 -- common/autotest_common.sh@910 -- # force=-f 00:08:25.430 17:00:41 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:25.994 btrfs-progs v6.6.2 00:08:25.994 See https://btrfs.readthedocs.io for more information. 00:08:25.994 00:08:25.994 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
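The ext4 pass above exercises a fixed create/verify cycle: mount the fresh filesystem, write and delete a file with a sync on either side, unmount, then check that the target process is still alive and the namespace is still exported. A condensed bash sketch of that cycle, mirroring the target/filesystem.sh steps in the trace (the partition name and target pid come from earlier steps, not recomputed here):

    # Condensed sketch of the mount/IO/unmount verification traced above.
    verify_filesystem() {
        local part=$1 pid=$2            # e.g. /dev/nvme0n1p1 and the nvmf_tgt pid
        mount "$part" /mnt/device
        touch /mnt/device/aaa && sync   # the write must reach the target
        rm /mnt/device/aaa && sync      # and so must the delete
        umount /mnt/device
        kill -0 "$pid" || return 1                             # nvmf_tgt still running?
        lsblk -l -o NAME | grep -q -w nvme0n1 || return 1      # controller still visible
        lsblk -l -o NAME | grep -q -w nvme0n1p1 || return 1    # partition still visible
    }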
00:08:25.994 NOTE: several default settings have changed in version 5.15, please make sure 00:08:25.994 this does not affect your deployments: 00:08:25.994 - DUP for metadata (-m dup) 00:08:25.994 - enabled no-holes (-O no-holes) 00:08:25.994 - enabled free-space-tree (-R free-space-tree) 00:08:25.994 00:08:25.994 Label: (null) 00:08:25.994 UUID: 94200b64-cd2f-4adf-9d45-3ccc13ddbdea 00:08:25.994 Node size: 16384 00:08:25.994 Sector size: 4096 00:08:25.994 Filesystem size: 510.00MiB 00:08:25.994 Block group profiles: 00:08:25.994 Data: single 8.00MiB 00:08:25.994 Metadata: DUP 32.00MiB 00:08:25.994 System: DUP 8.00MiB 00:08:25.994 SSD detected: yes 00:08:25.994 Zoned device: no 00:08:25.994 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:25.994 Runtime features: free-space-tree 00:08:25.994 Checksum: crc32c 00:08:25.994 Number of devices: 1 00:08:25.994 Devices: 00:08:25.994 ID SIZE PATH 00:08:25.995 1 510.00MiB /dev/nvme0n1p1 00:08:25.995 00:08:25.995 17:00:41 -- common/autotest_common.sh@921 -- # return 0 00:08:25.995 17:00:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:26.558 17:00:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:26.815 17:00:42 -- target/filesystem.sh@25 -- # sync 00:08:26.815 17:00:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:26.815 17:00:42 -- target/filesystem.sh@27 -- # sync 00:08:26.815 17:00:42 -- target/filesystem.sh@29 -- # i=0 00:08:26.815 17:00:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:26.815 17:00:42 -- target/filesystem.sh@37 -- # kill -0 440271 00:08:26.815 17:00:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:26.815 17:00:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:26.815 17:00:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:26.815 17:00:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:26.815 00:08:26.815 real 0m1.256s 00:08:26.815 user 0m0.015s 00:08:26.815 sys 0m0.046s 00:08:26.815 17:00:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.815 17:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:26.815 ************************************ 00:08:26.815 END TEST filesystem_btrfs 00:08:26.815 ************************************ 00:08:26.815 17:00:42 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:26.815 17:00:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:26.815 17:00:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.815 17:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:26.815 ************************************ 00:08:26.815 START TEST filesystem_xfs 00:08:26.815 ************************************ 00:08:26.815 17:00:42 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:26.815 17:00:42 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:26.815 17:00:42 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.815 17:00:42 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:26.815 17:00:42 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:26.815 17:00:42 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:26.815 17:00:42 -- common/autotest_common.sh@904 -- # local i=0 00:08:26.815 17:00:42 -- common/autotest_common.sh@905 -- # local force 00:08:26.815 17:00:42 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:26.815 17:00:42 -- common/autotest_common.sh@910 -- # force=-f 00:08:26.815 17:00:42 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:26.815 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:26.815 = sectsz=512 attr=2, projid32bit=1 00:08:26.815 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:26.815 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:26.815 data = bsize=4096 blocks=130560, imaxpct=25 00:08:26.815 = sunit=0 swidth=0 blks 00:08:26.815 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:26.815 log =internal log bsize=4096 blocks=16384, version=2 00:08:26.815 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:26.815 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:27.743 Discarding blocks...Done. 00:08:27.743 17:00:43 -- common/autotest_common.sh@921 -- # return 0 00:08:27.743 17:00:43 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:30.312 17:00:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:30.312 17:00:46 -- target/filesystem.sh@25 -- # sync 00:08:30.312 17:00:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:30.312 17:00:46 -- target/filesystem.sh@27 -- # sync 00:08:30.312 17:00:46 -- target/filesystem.sh@29 -- # i=0 00:08:30.312 17:00:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:30.312 17:00:46 -- target/filesystem.sh@37 -- # kill -0 440271 00:08:30.312 17:00:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:30.312 17:00:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:30.312 17:00:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:30.312 17:00:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:30.312 00:08:30.312 real 0m3.523s 00:08:30.312 user 0m0.014s 00:08:30.312 sys 0m0.041s 00:08:30.312 17:00:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.312 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:30.312 ************************************ 00:08:30.312 END TEST filesystem_xfs 00:08:30.312 ************************************ 00:08:30.312 17:00:46 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:30.312 17:00:46 -- target/filesystem.sh@93 -- # sync 00:08:30.312 17:00:46 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.312 17:00:46 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.312 17:00:46 -- common/autotest_common.sh@1198 -- # local i=0 00:08:30.312 17:00:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:30.312 17:00:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.312 17:00:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:30.312 17:00:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.312 17:00:46 -- common/autotest_common.sh@1210 -- # return 0 00:08:30.312 17:00:46 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.312 17:00:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.312 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:30.569 17:00:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.569 17:00:46 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:30.569 17:00:46 -- target/filesystem.sh@101 -- # killprocess 440271 00:08:30.569 17:00:46 -- common/autotest_common.sh@926 -- # '[' -z 440271 ']' 00:08:30.569 17:00:46 -- common/autotest_common.sh@930 -- # kill -0 440271 00:08:30.569 17:00:46 -- 
common/autotest_common.sh@931 -- # uname 00:08:30.569 17:00:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:30.569 17:00:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 440271 00:08:30.569 17:00:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:30.569 17:00:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:30.569 17:00:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 440271' 00:08:30.569 killing process with pid 440271 00:08:30.569 17:00:46 -- common/autotest_common.sh@945 -- # kill 440271 00:08:30.569 17:00:46 -- common/autotest_common.sh@950 -- # wait 440271 00:08:30.827 17:00:46 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:30.827 00:08:30.827 real 0m13.583s 00:08:30.827 user 0m52.287s 00:08:30.827 sys 0m1.844s 00:08:30.827 17:00:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.827 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:30.827 ************************************ 00:08:30.827 END TEST nvmf_filesystem_no_in_capsule 00:08:30.827 ************************************ 00:08:30.827 17:00:46 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:30.827 17:00:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:30.827 17:00:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.827 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:30.827 ************************************ 00:08:30.827 START TEST nvmf_filesystem_in_capsule 00:08:30.827 ************************************ 00:08:30.827 17:00:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:30.827 17:00:46 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:30.827 17:00:46 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:30.827 17:00:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:30.827 17:00:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:30.827 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:30.827 17:00:46 -- nvmf/common.sh@469 -- # nvmfpid=442129 00:08:30.827 17:00:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.827 17:00:46 -- nvmf/common.sh@470 -- # waitforlisten 442129 00:08:30.827 17:00:46 -- common/autotest_common.sh@819 -- # '[' -z 442129 ']' 00:08:30.827 17:00:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.827 17:00:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:30.827 17:00:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.827 17:00:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:30.827 17:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:31.085 [2024-07-20 17:00:47.018940] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
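The teardown just traced resolves the pid's command name with ps before killing it, echoes what it is about to do, and then waits so the dead target is reaped before the next pass starts. A minimal sketch of that guard, assuming a procps-style ps(1); reactor_0 is the name the trace reports for the SPDK target:

    # Guarded kill: never shoot a recycled pid blindly, and reap the child
    # so its listening port and hugepages are free for the next pass.
    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0     # already gone
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }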
00:08:31.085 [2024-07-20 17:00:47.019019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.085 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.085 [2024-07-20 17:00:47.086355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.085 [2024-07-20 17:00:47.170716] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:31.085 [2024-07-20 17:00:47.170878] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.085 [2024-07-20 17:00:47.170902] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.085 [2024-07-20 17:00:47.170915] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.085 [2024-07-20 17:00:47.170963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.085 [2024-07-20 17:00:47.171021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.085 [2024-07-20 17:00:47.171087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.085 [2024-07-20 17:00:47.171090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.015 17:00:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:32.015 17:00:48 -- common/autotest_common.sh@852 -- # return 0 00:08:32.015 17:00:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:32.015 17:00:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:32.015 17:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:32.015 17:00:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.015 17:00:48 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:32.015 17:00:48 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:32.015 17:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.015 17:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:32.015 [2024-07-20 17:00:48.039493] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.015 17:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.015 17:00:48 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:32.015 17:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.015 17:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:32.272 Malloc1 00:08:32.272 17:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.272 17:00:48 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.272 17:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.272 17:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:32.272 17:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.272 17:00:48 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:32.272 17:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.272 17:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:32.272 17:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.272 17:00:48 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
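The only functional difference from the first pass is the transport's in-capsule data size: this pass creates the TCP transport with -c 4096, so host writes of up to 4 KiB travel inside the command capsule instead of being fetched by the target in a separate data transfer. Side by side, using the same rpc.py interface the trace drives (-c is the in-capsule data size option):

    # First pass (in_capsule=0): all data moves via separate transfers.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # This pass (in_capsule=4096): small writes ride inside the capsule.
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096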
00:08:32.272 17:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.272 17:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:32.272 [2024-07-20 17:00:48.220233] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.272 17:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.272 17:00:48 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:32.272 17:00:48 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:32.272 17:00:48 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:32.272 17:00:48 -- common/autotest_common.sh@1359 -- # local bs 00:08:32.272 17:00:48 -- common/autotest_common.sh@1360 -- # local nb 00:08:32.272 17:00:48 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:32.272 17:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.272 17:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:32.272 17:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.272 17:00:48 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:32.272 { 00:08:32.272 "name": "Malloc1", 00:08:32.272 "aliases": [ 00:08:32.272 "a5e99538-1a99-46be-a074-cb527e456da2" 00:08:32.272 ], 00:08:32.272 "product_name": "Malloc disk", 00:08:32.272 "block_size": 512, 00:08:32.272 "num_blocks": 1048576, 00:08:32.272 "uuid": "a5e99538-1a99-46be-a074-cb527e456da2", 00:08:32.272 "assigned_rate_limits": { 00:08:32.272 "rw_ios_per_sec": 0, 00:08:32.272 "rw_mbytes_per_sec": 0, 00:08:32.272 "r_mbytes_per_sec": 0, 00:08:32.272 "w_mbytes_per_sec": 0 00:08:32.272 }, 00:08:32.272 "claimed": true, 00:08:32.272 "claim_type": "exclusive_write", 00:08:32.272 "zoned": false, 00:08:32.272 "supported_io_types": { 00:08:32.272 "read": true, 00:08:32.272 "write": true, 00:08:32.272 "unmap": true, 00:08:32.272 "write_zeroes": true, 00:08:32.272 "flush": true, 00:08:32.272 "reset": true, 00:08:32.272 "compare": false, 00:08:32.272 "compare_and_write": false, 00:08:32.272 "abort": true, 00:08:32.272 "nvme_admin": false, 00:08:32.272 "nvme_io": false 00:08:32.272 }, 00:08:32.272 "memory_domains": [ 00:08:32.272 { 00:08:32.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.272 "dma_device_type": 2 00:08:32.272 } 00:08:32.272 ], 00:08:32.272 "driver_specific": {} 00:08:32.272 } 00:08:32.272 ]' 00:08:32.272 17:00:48 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:32.272 17:00:48 -- common/autotest_common.sh@1362 -- # bs=512 00:08:32.272 17:00:48 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:32.272 17:00:48 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:32.272 17:00:48 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:32.272 17:00:48 -- common/autotest_common.sh@1367 -- # echo 512 00:08:32.272 17:00:48 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:32.272 17:00:48 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.836 17:00:48 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.836 17:00:48 -- common/autotest_common.sh@1177 -- # local i=0 00:08:32.836 17:00:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.836 17:00:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:32.836 17:00:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:35.359 17:00:50 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:35.359 17:00:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:35.359 17:00:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:35.359 17:00:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:35.359 17:00:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:35.359 17:00:50 -- common/autotest_common.sh@1187 -- # return 0 00:08:35.359 17:00:50 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:35.359 17:00:50 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:35.359 17:00:50 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:35.359 17:00:50 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:35.359 17:00:50 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:35.359 17:00:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:35.359 17:00:50 -- setup/common.sh@80 -- # echo 536870912 00:08:35.359 17:00:50 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:35.359 17:00:50 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:35.359 17:00:50 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:35.359 17:00:50 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:35.359 17:00:51 -- target/filesystem.sh@69 -- # partprobe 00:08:35.923 17:00:51 -- target/filesystem.sh@70 -- # sleep 1 00:08:36.854 17:00:52 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:36.854 17:00:52 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:36.854 17:00:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:36.854 17:00:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:36.854 17:00:52 -- common/autotest_common.sh@10 -- # set +x 00:08:36.854 ************************************ 00:08:36.854 START TEST filesystem_in_capsule_ext4 00:08:36.854 ************************************ 00:08:36.854 17:00:52 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:36.854 17:00:52 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:36.854 17:00:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.854 17:00:52 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:36.854 17:00:52 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:36.854 17:00:52 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:36.854 17:00:52 -- common/autotest_common.sh@904 -- # local i=0 00:08:36.854 17:00:52 -- common/autotest_common.sh@905 -- # local force 00:08:36.854 17:00:52 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:36.854 17:00:52 -- common/autotest_common.sh@908 -- # force=-F 00:08:36.854 17:00:52 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:36.854 mke2fs 1.46.5 (30-Dec-2021) 00:08:36.854 Discarding device blocks: 0/522240 done 00:08:36.854 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:36.854 Filesystem UUID: 18035a79-39f4-4c8c-8918-df14a2d68fc1 00:08:36.854 Superblock backups stored on blocks: 00:08:36.854 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:36.854 00:08:36.854 Allocating group tables: 0/64 done 00:08:36.854 Writing inode tables: 0/64 done 00:08:37.110 Creating journal (8192 blocks): done 00:08:38.190 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:08:38.190 00:08:38.190 
17:00:54 -- common/autotest_common.sh@921 -- # return 0 00:08:38.190 17:00:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:39.124 17:00:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:39.124 17:00:55 -- target/filesystem.sh@25 -- # sync 00:08:39.124 17:00:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:39.124 17:00:55 -- target/filesystem.sh@27 -- # sync 00:08:39.124 17:00:55 -- target/filesystem.sh@29 -- # i=0 00:08:39.124 17:00:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:39.124 17:00:55 -- target/filesystem.sh@37 -- # kill -0 442129 00:08:39.124 17:00:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:39.124 17:00:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:39.124 17:00:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:39.124 17:00:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:39.124 00:08:39.124 real 0m2.214s 00:08:39.124 user 0m0.009s 00:08:39.124 sys 0m0.040s 00:08:39.124 17:00:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.124 17:00:55 -- common/autotest_common.sh@10 -- # set +x 00:08:39.124 ************************************ 00:08:39.124 END TEST filesystem_in_capsule_ext4 00:08:39.124 ************************************ 00:08:39.124 17:00:55 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:39.124 17:00:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:39.124 17:00:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.124 17:00:55 -- common/autotest_common.sh@10 -- # set +x 00:08:39.124 ************************************ 00:08:39.124 START TEST filesystem_in_capsule_btrfs 00:08:39.124 ************************************ 00:08:39.124 17:00:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:39.124 17:00:55 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:39.124 17:00:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.124 17:00:55 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:39.124 17:00:55 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:39.124 17:00:55 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:39.124 17:00:55 -- common/autotest_common.sh@904 -- # local i=0 00:08:39.124 17:00:55 -- common/autotest_common.sh@905 -- # local force 00:08:39.124 17:00:55 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:39.124 17:00:55 -- common/autotest_common.sh@910 -- # force=-f 00:08:39.124 17:00:55 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:39.697 btrfs-progs v6.6.2 00:08:39.697 See https://btrfs.readthedocs.io for more information. 00:08:39.697 00:08:39.697 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
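As in the first pass, make_filesystem picks the force flag per filesystem type: mkfs.ext4 spells it -F while mkfs.btrfs and mkfs.xfs use -f, which is the branch visible at autotest_common.sh@907-910 in the traces. The dispatch reduces to the sketch below (the real helper also carries a retry counter, the local i=0 in the trace, omitted here):

    # Force-flag dispatch replayed for every filesystem under test.
    make_filesystem() {
        local fstype=$1 dev=$2 force=-f
        [[ $fstype == ext4 ]] && force=-F
        "mkfs.$fstype" "$force" "$dev"
    }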
00:08:39.697 NOTE: several default settings have changed in version 5.15, please make sure 00:08:39.697 this does not affect your deployments: 00:08:39.697 - DUP for metadata (-m dup) 00:08:39.697 - enabled no-holes (-O no-holes) 00:08:39.697 - enabled free-space-tree (-R free-space-tree) 00:08:39.697 00:08:39.697 Label: (null) 00:08:39.697 UUID: 0fb66260-105c-48b8-8e80-e77c15f36307 00:08:39.697 Node size: 16384 00:08:39.697 Sector size: 4096 00:08:39.697 Filesystem size: 510.00MiB 00:08:39.697 Block group profiles: 00:08:39.697 Data: single 8.00MiB 00:08:39.697 Metadata: DUP 32.00MiB 00:08:39.697 System: DUP 8.00MiB 00:08:39.697 SSD detected: yes 00:08:39.697 Zoned device: no 00:08:39.697 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:39.697 Runtime features: free-space-tree 00:08:39.697 Checksum: crc32c 00:08:39.697 Number of devices: 1 00:08:39.697 Devices: 00:08:39.697 ID SIZE PATH 00:08:39.697 1 510.00MiB /dev/nvme0n1p1 00:08:39.697 00:08:39.697 17:00:55 -- common/autotest_common.sh@921 -- # return 0 00:08:39.697 17:00:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:40.261 17:00:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:40.262 17:00:56 -- target/filesystem.sh@25 -- # sync 00:08:40.262 17:00:56 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:40.262 17:00:56 -- target/filesystem.sh@27 -- # sync 00:08:40.519 17:00:56 -- target/filesystem.sh@29 -- # i=0 00:08:40.519 17:00:56 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:40.519 17:00:56 -- target/filesystem.sh@37 -- # kill -0 442129 00:08:40.519 17:00:56 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:40.519 17:00:56 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:40.519 17:00:56 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:40.519 17:00:56 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:40.519 00:08:40.519 real 0m1.294s 00:08:40.519 user 0m0.013s 00:08:40.519 sys 0m0.045s 00:08:40.519 17:00:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.519 17:00:56 -- common/autotest_common.sh@10 -- # set +x 00:08:40.519 ************************************ 00:08:40.519 END TEST filesystem_in_capsule_btrfs 00:08:40.519 ************************************ 00:08:40.519 17:00:56 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:40.519 17:00:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:40.519 17:00:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.519 17:00:56 -- common/autotest_common.sh@10 -- # set +x 00:08:40.519 ************************************ 00:08:40.519 START TEST filesystem_in_capsule_xfs 00:08:40.519 ************************************ 00:08:40.519 17:00:56 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:40.519 17:00:56 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:40.519 17:00:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:40.519 17:00:56 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:40.519 17:00:56 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:40.519 17:00:56 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:40.519 17:00:56 -- common/autotest_common.sh@904 -- # local i=0 00:08:40.519 17:00:56 -- common/autotest_common.sh@905 -- # local force 00:08:40.519 17:00:56 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:40.519 17:00:56 -- common/autotest_common.sh@910 -- # force=-f 
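The geometry mkfs.xfs prints next, identical to the first pass, follows directly from the 510 MiB partition parted carved out:

    510 MiB           = 510 * 1024 * 1024  = 534,773,760 bytes
    data blocks       = 534,773,760 / 4096 = 130,560    (bsize=4096, blocks=130560)
    allocation groups = 130,560 / 32,640   = 4          (agcount=4, agsize=32640 blks)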
00:08:40.519 17:00:56 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:40.519 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:40.519 = sectsz=512 attr=2, projid32bit=1 00:08:40.519 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:40.519 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:40.519 data = bsize=4096 blocks=130560, imaxpct=25 00:08:40.519 = sunit=0 swidth=0 blks 00:08:40.519 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:40.519 log =internal log bsize=4096 blocks=16384, version=2 00:08:40.519 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:40.520 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:41.460 Discarding blocks...Done. 00:08:41.460 17:00:57 -- common/autotest_common.sh@921 -- # return 0 00:08:41.460 17:00:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:43.995 17:00:59 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:43.995 17:00:59 -- target/filesystem.sh@25 -- # sync 00:08:43.995 17:00:59 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:43.995 17:00:59 -- target/filesystem.sh@27 -- # sync 00:08:43.995 17:00:59 -- target/filesystem.sh@29 -- # i=0 00:08:43.995 17:00:59 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:43.995 17:00:59 -- target/filesystem.sh@37 -- # kill -0 442129 00:08:43.995 17:00:59 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:43.995 17:00:59 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:43.995 17:00:59 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:43.995 17:00:59 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:43.995 00:08:43.995 real 0m3.535s 00:08:43.995 user 0m0.014s 00:08:43.995 sys 0m0.042s 00:08:43.995 17:00:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.995 17:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:43.995 ************************************ 00:08:43.995 END TEST filesystem_in_capsule_xfs 00:08:43.995 ************************************ 00:08:43.995 17:01:00 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:44.259 17:01:00 -- target/filesystem.sh@93 -- # sync 00:08:44.259 17:01:00 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.259 17:01:00 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.259 17:01:00 -- common/autotest_common.sh@1198 -- # local i=0 00:08:44.259 17:01:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:44.259 17:01:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.259 17:01:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:44.259 17:01:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.259 17:01:00 -- common/autotest_common.sh@1210 -- # return 0 00:08:44.259 17:01:00 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.259 17:01:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:44.259 17:01:00 -- common/autotest_common.sh@10 -- # set +x 00:08:44.541 17:01:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:44.541 17:01:00 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:44.541 17:01:00 -- target/filesystem.sh@101 -- # killprocess 442129 00:08:44.541 17:01:00 -- common/autotest_common.sh@926 -- # '[' -z 442129 ']' 00:08:44.541 17:01:00 -- common/autotest_common.sh@930 -- # kill -0 442129 
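The disconnect sequence above (flock-guarded partition delete, sync, nvme disconnect) is verified the same way connect was, only inverted: poll lsblk until no device reports the subsystem serial any more. A sketch of that counterpart, assuming the same retry bound and interval as the connect-side wait (the trace shows the grep checks and return 0 but not the loop limits):

    # Poll until the SPDKISFASTANDAWESOME serial vanishes from lsblk.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1   # assumed bound, mirroring waitforserial
            sleep 2
        done
        return 0
    }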
00:08:44.541 17:01:00 -- common/autotest_common.sh@931 -- # uname 00:08:44.542 17:01:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:44.542 17:01:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 442129 00:08:44.542 17:01:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:44.542 17:01:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:44.542 17:01:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 442129' 00:08:44.542 killing process with pid 442129 00:08:44.542 17:01:00 -- common/autotest_common.sh@945 -- # kill 442129 00:08:44.542 17:01:00 -- common/autotest_common.sh@950 -- # wait 442129 00:08:44.799 17:01:00 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:44.799 00:08:44.799 real 0m13.922s 00:08:44.799 user 0m53.705s 00:08:44.799 sys 0m1.876s 00:08:44.799 17:01:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.799 17:01:00 -- common/autotest_common.sh@10 -- # set +x 00:08:44.799 ************************************ 00:08:44.799 END TEST nvmf_filesystem_in_capsule 00:08:44.799 ************************************ 00:08:44.799 17:01:00 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:44.799 17:01:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:44.799 17:01:00 -- nvmf/common.sh@116 -- # sync 00:08:44.799 17:01:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:44.799 17:01:00 -- nvmf/common.sh@119 -- # set +e 00:08:44.799 17:01:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:44.799 17:01:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:44.799 rmmod nvme_tcp 00:08:44.800 rmmod nvme_fabrics 00:08:44.800 rmmod nvme_keyring 00:08:45.057 17:01:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:45.057 17:01:00 -- nvmf/common.sh@123 -- # set -e 00:08:45.057 17:01:00 -- nvmf/common.sh@124 -- # return 0 00:08:45.057 17:01:00 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:45.057 17:01:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:45.057 17:01:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:45.057 17:01:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:45.057 17:01:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.057 17:01:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:45.057 17:01:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.057 17:01:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.057 17:01:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.958 17:01:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:46.958 00:08:46.959 real 0m32.017s 00:08:46.959 user 1m46.894s 00:08:46.959 sys 0m5.333s 00:08:46.959 17:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.959 17:01:03 -- common/autotest_common.sh@10 -- # set +x 00:08:46.959 ************************************ 00:08:46.959 END TEST nvmf_filesystem 00:08:46.959 ************************************ 00:08:46.959 17:01:03 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:46.959 17:01:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:46.959 17:01:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:46.959 17:01:03 -- common/autotest_common.sh@10 -- # set +x 00:08:46.959 ************************************ 00:08:46.959 START TEST nvmf_discovery 00:08:46.959 ************************************ 00:08:46.959 17:01:03 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:46.959 * Looking for test storage... 00:08:46.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.959 17:01:03 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.959 17:01:03 -- nvmf/common.sh@7 -- # uname -s 00:08:46.959 17:01:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.959 17:01:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.959 17:01:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.959 17:01:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.959 17:01:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.959 17:01:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.959 17:01:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.959 17:01:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.959 17:01:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.959 17:01:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.959 17:01:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.959 17:01:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.959 17:01:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.959 17:01:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.959 17:01:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.959 17:01:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.959 17:01:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.959 17:01:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.959 17:01:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.959 17:01:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.959 17:01:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.959 17:01:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.959 17:01:03 -- paths/export.sh@5 -- # export PATH 00:08:46.959 17:01:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.959 17:01:03 -- nvmf/common.sh@46 -- # : 0 00:08:46.959 17:01:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:46.959 17:01:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:46.959 17:01:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:46.959 17:01:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.959 17:01:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.959 17:01:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:46.959 17:01:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:46.959 17:01:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:46.959 17:01:03 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:46.959 17:01:03 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:46.959 17:01:03 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:46.959 17:01:03 -- target/discovery.sh@15 -- # hash nvme 00:08:46.959 17:01:03 -- target/discovery.sh@20 -- # nvmftestinit 00:08:46.959 17:01:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:46.959 17:01:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.959 17:01:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:46.959 17:01:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:46.959 17:01:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:46.959 17:01:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.959 17:01:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.959 17:01:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.959 17:01:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:46.959 17:01:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:46.959 17:01:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:46.959 17:01:03 -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 17:01:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:49.498 17:01:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:49.498 17:01:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:49.498 17:01:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:49.498 17:01:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:49.498 17:01:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:49.498 17:01:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:49.498 17:01:05 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:49.498 17:01:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:49.498 17:01:05 -- nvmf/common.sh@295 -- # e810=() 00:08:49.498 17:01:05 -- nvmf/common.sh@295 -- # local -ga e810 00:08:49.498 17:01:05 -- nvmf/common.sh@296 -- # x722=() 00:08:49.498 17:01:05 -- nvmf/common.sh@296 -- # local -ga x722 00:08:49.498 17:01:05 -- nvmf/common.sh@297 -- # mlx=() 00:08:49.498 17:01:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:49.498 17:01:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.498 17:01:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:49.498 17:01:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:49.498 17:01:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:49.498 17:01:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:49.498 17:01:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.498 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.498 17:01:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:49.498 17:01:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.498 17:01:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:49.498 17:01:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:49.498 17:01:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.498 17:01:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:49.498 17:01:05 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.498 17:01:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:49.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:49.498 17:01:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.498 17:01:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:49.498 17:01:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.498 17:01:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:49.498 17:01:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.498 17:01:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.498 17:01:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.498 17:01:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:49.498 17:01:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:49.498 17:01:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:49.498 17:01:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.498 17:01:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.498 17:01:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.498 17:01:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:49.498 17:01:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.498 17:01:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.498 17:01:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:49.498 17:01:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.498 17:01:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.498 17:01:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:49.498 17:01:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:49.498 17:01:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.498 17:01:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.498 17:01:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.498 17:01:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.498 17:01:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:49.498 17:01:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.498 17:01:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.498 17:01:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.498 17:01:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:49.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:08:49.498 00:08:49.498 --- 10.0.0.2 ping statistics --- 00:08:49.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.498 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:08:49.498 17:01:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:08:49.498 00:08:49.498 --- 10.0.0.1 ping statistics --- 00:08:49.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.498 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:08:49.498 17:01:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.498 17:01:05 -- nvmf/common.sh@410 -- # return 0 00:08:49.498 17:01:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:49.498 17:01:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.498 17:01:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:49.498 17:01:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.498 17:01:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:49.498 17:01:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:49.498 17:01:05 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:49.498 17:01:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:49.498 17:01:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:49.498 17:01:05 -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 17:01:05 -- nvmf/common.sh@469 -- # nvmfpid=445930 00:08:49.498 17:01:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.498 17:01:05 -- nvmf/common.sh@470 -- # waitforlisten 445930 00:08:49.498 17:01:05 -- common/autotest_common.sh@819 -- # '[' -z 445930 ']' 00:08:49.498 17:01:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.498 17:01:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:49.498 17:01:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.498 17:01:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:49.498 17:01:05 -- common/autotest_common.sh@10 -- # set +x 00:08:49.498 [2024-07-20 17:01:05.314825] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:49.498 [2024-07-20 17:01:05.314914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.498 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.498 [2024-07-20 17:01:05.385086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.498 [2024-07-20 17:01:05.476550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:49.498 [2024-07-20 17:01:05.476687] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.498 [2024-07-20 17:01:05.476704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.498 [2024-07-20 17:01:05.476718] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
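The ping exchange above closes out nvmf_tcp_init's namespace split: port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), with an iptables rule admitting the NVMe/TCP port. Reduced to the commands the trace runs:

    # Namespace split behind the connectivity check (commands as traced).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator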
00:08:49.498 [2024-07-20 17:01:05.476783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.498 [2024-07-20 17:01:05.476813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.498 [2024-07-20 17:01:05.476832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.498 [2024-07-20 17:01:05.476835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.431 17:01:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:50.431 17:01:06 -- common/autotest_common.sh@852 -- # return 0 00:08:50.431 17:01:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:50.431 17:01:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.431 17:01:06 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 [2024-07-20 17:01:06.288319] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@26 -- # seq 1 4 00:08:50.431 17:01:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.431 17:01:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 Null1 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 [2024-07-20 17:01:06.328643] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.431 17:01:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 Null2 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:50.431 17:01:06 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.431 17:01:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 Null3 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:50.431 17:01:06 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 Null4 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:50.431 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.431 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.431 17:01:06 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:50.431 
17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.431 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.432 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.432 17:01:06 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:50.432 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.432 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.432 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.432 17:01:06 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:50.432 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.432 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.432 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.432 17:01:06 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:50.689 00:08:50.689 Discovery Log Number of Records 6, Generation counter 6 00:08:50.689 =====Discovery Log Entry 0====== 00:08:50.689 trtype: tcp 00:08:50.689 adrfam: ipv4 00:08:50.689 subtype: current discovery subsystem 00:08:50.689 treq: not required 00:08:50.689 portid: 0 00:08:50.689 trsvcid: 4420 00:08:50.689 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:50.689 traddr: 10.0.0.2 00:08:50.689 eflags: explicit discovery connections, duplicate discovery information 00:08:50.689 sectype: none 00:08:50.689 =====Discovery Log Entry 1====== 00:08:50.689 trtype: tcp 00:08:50.689 adrfam: ipv4 00:08:50.689 subtype: nvme subsystem 00:08:50.689 treq: not required 00:08:50.689 portid: 0 00:08:50.689 trsvcid: 4420 00:08:50.689 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:50.689 traddr: 10.0.0.2 00:08:50.689 eflags: none 00:08:50.689 sectype: none 00:08:50.689 =====Discovery Log Entry 2====== 00:08:50.689 trtype: tcp 00:08:50.689 adrfam: ipv4 00:08:50.689 subtype: nvme subsystem 00:08:50.689 treq: not required 00:08:50.689 portid: 0 00:08:50.689 trsvcid: 4420 00:08:50.689 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:50.689 traddr: 10.0.0.2 00:08:50.689 eflags: none 00:08:50.689 sectype: none 00:08:50.689 =====Discovery Log Entry 3====== 00:08:50.689 trtype: tcp 00:08:50.690 adrfam: ipv4 00:08:50.690 subtype: nvme subsystem 00:08:50.690 treq: not required 00:08:50.690 portid: 0 00:08:50.690 trsvcid: 4420 00:08:50.690 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:50.690 traddr: 10.0.0.2 00:08:50.690 eflags: none 00:08:50.690 sectype: none 00:08:50.690 =====Discovery Log Entry 4====== 00:08:50.690 trtype: tcp 00:08:50.690 adrfam: ipv4 00:08:50.690 subtype: nvme subsystem 00:08:50.690 treq: not required 00:08:50.690 portid: 0 00:08:50.690 trsvcid: 4420 00:08:50.690 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:50.690 traddr: 10.0.0.2 00:08:50.690 eflags: none 00:08:50.690 sectype: none 00:08:50.690 =====Discovery Log Entry 5====== 00:08:50.690 trtype: tcp 00:08:50.690 adrfam: ipv4 00:08:50.690 subtype: discovery subsystem referral 00:08:50.690 treq: not required 00:08:50.690 portid: 0 00:08:50.690 trsvcid: 4430 00:08:50.690 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:50.690 traddr: 10.0.0.2 00:08:50.690 eflags: none 00:08:50.690 sectype: none 00:08:50.690 17:01:06 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:50.690 Perform nvmf subsystem discovery via RPC 00:08:50.690 17:01:06 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 [2024-07-20 17:01:06.637497] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:50.690 [ 00:08:50.690 { 00:08:50.690 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:50.690 "subtype": "Discovery", 00:08:50.690 "listen_addresses": [ 00:08:50.690 { 00:08:50.690 "transport": "TCP", 00:08:50.690 "trtype": "TCP", 00:08:50.690 "adrfam": "IPv4", 00:08:50.690 "traddr": "10.0.0.2", 00:08:50.690 "trsvcid": "4420" 00:08:50.690 } 00:08:50.690 ], 00:08:50.690 "allow_any_host": true, 00:08:50.690 "hosts": [] 00:08:50.690 }, 00:08:50.690 { 00:08:50.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.690 "subtype": "NVMe", 00:08:50.690 "listen_addresses": [ 00:08:50.690 { 00:08:50.690 "transport": "TCP", 00:08:50.690 "trtype": "TCP", 00:08:50.690 "adrfam": "IPv4", 00:08:50.690 "traddr": "10.0.0.2", 00:08:50.690 "trsvcid": "4420" 00:08:50.690 } 00:08:50.690 ], 00:08:50.690 "allow_any_host": true, 00:08:50.690 "hosts": [], 00:08:50.690 "serial_number": "SPDK00000000000001", 00:08:50.690 "model_number": "SPDK bdev Controller", 00:08:50.690 "max_namespaces": 32, 00:08:50.690 "min_cntlid": 1, 00:08:50.690 "max_cntlid": 65519, 00:08:50.690 "namespaces": [ 00:08:50.690 { 00:08:50.690 "nsid": 1, 00:08:50.690 "bdev_name": "Null1", 00:08:50.690 "name": "Null1", 00:08:50.690 "nguid": "C8A6EAB1B75141689B16575BDC9D7787", 00:08:50.690 "uuid": "c8a6eab1-b751-4168-9b16-575bdc9d7787" 00:08:50.690 } 00:08:50.690 ] 00:08:50.690 }, 00:08:50.690 { 00:08:50.690 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:50.690 "subtype": "NVMe", 00:08:50.690 "listen_addresses": [ 00:08:50.690 { 00:08:50.690 "transport": "TCP", 00:08:50.690 "trtype": "TCP", 00:08:50.690 "adrfam": "IPv4", 00:08:50.690 "traddr": "10.0.0.2", 00:08:50.690 "trsvcid": "4420" 00:08:50.690 } 00:08:50.690 ], 00:08:50.690 "allow_any_host": true, 00:08:50.690 "hosts": [], 00:08:50.690 "serial_number": "SPDK00000000000002", 00:08:50.690 "model_number": "SPDK bdev Controller", 00:08:50.690 "max_namespaces": 32, 00:08:50.690 "min_cntlid": 1, 00:08:50.690 "max_cntlid": 65519, 00:08:50.690 "namespaces": [ 00:08:50.690 { 00:08:50.690 "nsid": 1, 00:08:50.690 "bdev_name": "Null2", 00:08:50.690 "name": "Null2", 00:08:50.690 "nguid": "E68FA3F3504A47438364BB2B1AF9E48A", 00:08:50.690 "uuid": "e68fa3f3-504a-4743-8364-bb2b1af9e48a" 00:08:50.690 } 00:08:50.690 ] 00:08:50.690 }, 00:08:50.690 { 00:08:50.690 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:50.690 "subtype": "NVMe", 00:08:50.690 "listen_addresses": [ 00:08:50.690 { 00:08:50.690 "transport": "TCP", 00:08:50.690 "trtype": "TCP", 00:08:50.690 "adrfam": "IPv4", 00:08:50.690 "traddr": "10.0.0.2", 00:08:50.690 "trsvcid": "4420" 00:08:50.690 } 00:08:50.690 ], 00:08:50.690 "allow_any_host": true, 00:08:50.690 "hosts": [], 00:08:50.690 "serial_number": "SPDK00000000000003", 00:08:50.690 "model_number": "SPDK bdev Controller", 00:08:50.690 "max_namespaces": 32, 00:08:50.690 "min_cntlid": 1, 00:08:50.690 "max_cntlid": 65519, 00:08:50.690 "namespaces": [ 00:08:50.690 { 00:08:50.690 "nsid": 1, 00:08:50.690 "bdev_name": "Null3", 00:08:50.690 "name": "Null3", 00:08:50.690 "nguid": "ACFBAAB4493C4113AC6999FFEDF715DD", 00:08:50.690 "uuid": "acfbaab4-493c-4113-ac69-99ffedf715dd" 00:08:50.690 } 00:08:50.690 ] 
00:08:50.690 }, 00:08:50.690 { 00:08:50.690 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:50.690 "subtype": "NVMe", 00:08:50.690 "listen_addresses": [ 00:08:50.690 { 00:08:50.690 "transport": "TCP", 00:08:50.690 "trtype": "TCP", 00:08:50.690 "adrfam": "IPv4", 00:08:50.690 "traddr": "10.0.0.2", 00:08:50.690 "trsvcid": "4420" 00:08:50.690 } 00:08:50.690 ], 00:08:50.690 "allow_any_host": true, 00:08:50.690 "hosts": [], 00:08:50.690 "serial_number": "SPDK00000000000004", 00:08:50.690 "model_number": "SPDK bdev Controller", 00:08:50.690 "max_namespaces": 32, 00:08:50.690 "min_cntlid": 1, 00:08:50.690 "max_cntlid": 65519, 00:08:50.690 "namespaces": [ 00:08:50.690 { 00:08:50.690 "nsid": 1, 00:08:50.690 "bdev_name": "Null4", 00:08:50.690 "name": "Null4", 00:08:50.690 "nguid": "32BD31803B2D47EF8C392357AD2C7196", 00:08:50.690 "uuid": "32bd3180-3b2d-47ef-8c39-2357ad2c7196" 00:08:50.690 } 00:08:50.690 ] 00:08:50.690 } 00:08:50.690 ] 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@42 -- # seq 1 4 00:08:50.690 17:01:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.690 17:01:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.690 17:01:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.690 17:01:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:50.690 17:01:06 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
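The provisioning sequence above (and the teardown continuing below) is four RPCs per subsystem, issued here through the rpc_cmd wrapper. A condensed sketch using scripts/rpc.py directly, which rpc_cmd is assumed to front; arguments are copied from the trace:

    for i in $(seq 1 4); do
        ./scripts/rpc.py bdev_null_create "Null$i" 102400 512                  # null bdev, 512-byte blocks, size as traced
        ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"                                        # allow any host, set serial number
        ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    ./scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'                     # discovery subsystem plus cnode1..4

The initiator-side check is the nvme discover call shown above, which reports one record per subsystem plus the referral added on port 4430.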
00:08:50.690 17:01:06 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:50.690 17:01:06 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:50.690 17:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.690 17:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:50.690 17:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.690 17:01:06 -- target/discovery.sh@49 -- # check_bdevs= 00:08:50.690 17:01:06 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:50.690 17:01:06 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:50.690 17:01:06 -- target/discovery.sh@57 -- # nvmftestfini 00:08:50.690 17:01:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:50.690 17:01:06 -- nvmf/common.sh@116 -- # sync 00:08:50.690 17:01:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:50.690 17:01:06 -- nvmf/common.sh@119 -- # set +e 00:08:50.690 17:01:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:50.690 17:01:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:50.690 rmmod nvme_tcp 00:08:50.690 rmmod nvme_fabrics 00:08:50.690 rmmod nvme_keyring 00:08:50.690 17:01:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:50.690 17:01:06 -- nvmf/common.sh@123 -- # set -e 00:08:50.690 17:01:06 -- nvmf/common.sh@124 -- # return 0 00:08:50.690 17:01:06 -- nvmf/common.sh@477 -- # '[' -n 445930 ']' 00:08:50.690 17:01:06 -- nvmf/common.sh@478 -- # killprocess 445930 00:08:50.690 17:01:06 -- common/autotest_common.sh@926 -- # '[' -z 445930 ']' 00:08:50.690 17:01:06 -- common/autotest_common.sh@930 -- # kill -0 445930 00:08:50.690 17:01:06 -- common/autotest_common.sh@931 -- # uname 00:08:50.691 17:01:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:50.691 17:01:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 445930 00:08:50.691 17:01:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:50.691 17:01:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:50.691 17:01:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 445930' 00:08:50.691 killing process with pid 445930 00:08:50.691 17:01:06 -- common/autotest_common.sh@945 -- # kill 445930 00:08:50.691 [2024-07-20 17:01:06.845555] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:50.691 17:01:06 -- common/autotest_common.sh@950 -- # wait 445930 00:08:50.950 17:01:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:50.950 17:01:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:50.950 17:01:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:50.950 17:01:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.950 17:01:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:50.950 17:01:07 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.950 17:01:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.950 17:01:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.481 17:01:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:53.481 00:08:53.481 real 0m6.089s 00:08:53.481 user 0m7.293s 00:08:53.481 sys 0m1.871s 00:08:53.481 17:01:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:53.481 17:01:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.481 ************************************ 00:08:53.481 END TEST nvmf_discovery 00:08:53.481 ************************************ 00:08:53.481 17:01:09 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:53.481 17:01:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:53.481 17:01:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:53.481 17:01:09 -- common/autotest_common.sh@10 -- # set +x 00:08:53.481 ************************************ 00:08:53.481 START TEST nvmf_referrals 00:08:53.481 ************************************ 00:08:53.481 17:01:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:53.481 * Looking for test storage... 00:08:53.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.481 17:01:09 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.481 17:01:09 -- nvmf/common.sh@7 -- # uname -s 00:08:53.481 17:01:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.481 17:01:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.481 17:01:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.481 17:01:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.481 17:01:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.481 17:01:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.481 17:01:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.481 17:01:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.481 17:01:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.481 17:01:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.481 17:01:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:53.481 17:01:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:53.481 17:01:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.481 17:01:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.481 17:01:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.481 17:01:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.481 17:01:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.481 17:01:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.481 17:01:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.482 17:01:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.482 17:01:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.482 17:01:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.482 17:01:09 -- paths/export.sh@5 -- # export PATH 00:08:53.482 17:01:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.482 17:01:09 -- nvmf/common.sh@46 -- # : 0 00:08:53.482 17:01:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:53.482 17:01:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:53.482 17:01:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:53.482 17:01:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.482 17:01:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.482 17:01:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:53.482 17:01:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:53.482 17:01:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:53.482 17:01:09 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:53.482 17:01:09 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:53.482 17:01:09 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:53.482 17:01:09 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:53.482 17:01:09 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:53.482 17:01:09 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:53.482 17:01:09 -- target/referrals.sh@37 -- # nvmftestinit 00:08:53.482 17:01:09 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:53.482 17:01:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.482 17:01:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:53.482 17:01:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:53.482 17:01:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:53.482 17:01:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.482 17:01:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.482 17:01:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.482 17:01:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:53.482 17:01:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:53.482 17:01:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:53.482 17:01:09 -- common/autotest_common.sh@10 -- # set +x 00:08:55.382 17:01:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:55.382 17:01:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:55.382 17:01:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:55.382 17:01:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:55.382 17:01:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:55.382 17:01:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:55.382 17:01:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:55.382 17:01:11 -- nvmf/common.sh@294 -- # net_devs=() 00:08:55.382 17:01:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:55.382 17:01:11 -- nvmf/common.sh@295 -- # e810=() 00:08:55.382 17:01:11 -- nvmf/common.sh@295 -- # local -ga e810 00:08:55.382 17:01:11 -- nvmf/common.sh@296 -- # x722=() 00:08:55.382 17:01:11 -- nvmf/common.sh@296 -- # local -ga x722 00:08:55.382 17:01:11 -- nvmf/common.sh@297 -- # mlx=() 00:08:55.382 17:01:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:55.382 17:01:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.382 17:01:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:55.382 17:01:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:55.382 17:01:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:55.382 17:01:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:55.382 17:01:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:55.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:55.382 17:01:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:55.382 17:01:11 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:55.382 17:01:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:55.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:55.382 17:01:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:55.382 17:01:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:55.382 17:01:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.382 17:01:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:55.382 17:01:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.382 17:01:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:55.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:55.382 17:01:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.382 17:01:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:55.382 17:01:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.382 17:01:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:55.382 17:01:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.382 17:01:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:55.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:55.382 17:01:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.382 17:01:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:55.382 17:01:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:55.382 17:01:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:55.382 17:01:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.382 17:01:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.382 17:01:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.382 17:01:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:55.382 17:01:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.382 17:01:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.382 17:01:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:55.382 17:01:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.382 17:01:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.382 17:01:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:55.382 17:01:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:55.382 17:01:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.382 17:01:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:55.382 17:01:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.382 17:01:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.382 17:01:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:55.382 17:01:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.382 17:01:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.382 17:01:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.382 17:01:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:55.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:08:55.382 00:08:55.382 --- 10.0.0.2 ping statistics --- 00:08:55.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.382 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:08:55.382 17:01:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:08:55.382 00:08:55.382 --- 10.0.0.1 ping statistics --- 00:08:55.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.382 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:08:55.382 17:01:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.382 17:01:11 -- nvmf/common.sh@410 -- # return 0 00:08:55.382 17:01:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:55.382 17:01:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.382 17:01:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:55.382 17:01:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.382 17:01:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:55.382 17:01:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:55.382 17:01:11 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:55.382 17:01:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:55.382 17:01:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:55.382 17:01:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.382 17:01:11 -- nvmf/common.sh@469 -- # nvmfpid=448052 00:08:55.382 17:01:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:55.382 17:01:11 -- nvmf/common.sh@470 -- # waitforlisten 448052 00:08:55.382 17:01:11 -- common/autotest_common.sh@819 -- # '[' -z 448052 ']' 00:08:55.382 17:01:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.382 17:01:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:55.382 17:01:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.382 17:01:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:55.382 17:01:11 -- common/autotest_common.sh@10 -- # set +x 00:08:55.382 [2024-07-20 17:01:11.484105] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
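The nvmf_tcp_init wiring traced above condenses to moving the target-side NIC port into a private namespace, addressing both ends, opening the NVMe/TCP port, and ping-checking the path. Interface names, namespace name, and addresses as in this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator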
00:08:55.382 [2024-07-20 17:01:11.484186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.382 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.640 [2024-07-20 17:01:11.555230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.640 [2024-07-20 17:01:11.648328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:55.640 [2024-07-20 17:01:11.648496] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.640 [2024-07-20 17:01:11.648516] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.640 [2024-07-20 17:01:11.648533] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.640 [2024-07-20 17:01:11.648603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.640 [2024-07-20 17:01:11.648661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.640 [2024-07-20 17:01:11.648715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.640 [2024-07-20 17:01:11.648718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.571 17:01:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:56.571 17:01:12 -- common/autotest_common.sh@852 -- # return 0 00:08:56.571 17:01:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:56.571 17:01:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:56.571 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.571 17:01:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.571 17:01:12 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.571 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.571 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.571 [2024-07-20 17:01:12.434372] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.571 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.571 17:01:12 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:56.571 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.571 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.571 [2024-07-20 17:01:12.446549] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:56.571 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.571 17:01:12 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:56.571 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.571 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.571 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.571 17:01:12 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:56.571 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.571 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.571 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.571 17:01:12 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:56.571 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.571 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.571 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.571 17:01:12 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.571 17:01:12 -- target/referrals.sh@48 -- # jq length 00:08:56.571 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.571 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.571 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.571 17:01:12 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:56.571 17:01:12 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:56.571 17:01:12 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:56.571 17:01:12 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.571 17:01:12 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:56.571 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.571 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.571 17:01:12 -- target/referrals.sh@21 -- # sort 00:08:56.571 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.571 17:01:12 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:56.571 17:01:12 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:56.571 17:01:12 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:56.571 17:01:12 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.571 17:01:12 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.571 17:01:12 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.571 17:01:12 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.571 17:01:12 -- target/referrals.sh@26 -- # sort 00:08:56.828 17:01:12 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:56.828 17:01:12 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:56.828 17:01:12 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:56.828 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.828 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.828 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.828 17:01:12 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:56.828 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.828 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.828 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.828 17:01:12 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:56.828 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.828 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.828 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.828 17:01:12 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.828 17:01:12 -- target/referrals.sh@56 -- # jq length 00:08:56.828 17:01:12 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.828 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.828 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.828 17:01:12 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:56.828 17:01:12 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:56.828 17:01:12 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.828 17:01:12 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.828 17:01:12 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.828 17:01:12 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.828 17:01:12 -- target/referrals.sh@26 -- # sort 00:08:56.828 17:01:12 -- target/referrals.sh@26 -- # echo 00:08:56.828 17:01:12 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:56.828 17:01:12 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:56.828 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.828 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.828 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.828 17:01:12 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:56.828 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.828 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.828 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.828 17:01:12 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:56.828 17:01:12 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:56.828 17:01:12 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:56.828 17:01:12 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:56.828 17:01:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:56.828 17:01:12 -- target/referrals.sh@21 -- # sort 00:08:56.828 17:01:12 -- common/autotest_common.sh@10 -- # set +x 00:08:56.828 17:01:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:56.828 17:01:12 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:56.828 17:01:12 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:56.828 17:01:12 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:56.828 17:01:12 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:56.828 17:01:12 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:56.828 17:01:12 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:56.828 17:01:12 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:56.828 17:01:12 -- target/referrals.sh@26 -- # sort 00:08:57.085 17:01:13 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:57.085 17:01:13 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:57.085 17:01:13 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:57.085 17:01:13 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:57.085 17:01:13 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:57.085 17:01:13 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.085 17:01:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:57.085 17:01:13 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:57.085 17:01:13 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:57.085 17:01:13 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:57.085 17:01:13 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:57.085 17:01:13 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.085 17:01:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:57.341 17:01:13 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:57.341 17:01:13 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:57.341 17:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.341 17:01:13 -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 17:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.341 17:01:13 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:57.341 17:01:13 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:57.341 17:01:13 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.341 17:01:13 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:57.341 17:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.341 17:01:13 -- target/referrals.sh@21 -- # sort 00:08:57.341 17:01:13 -- common/autotest_common.sh@10 -- # set +x 00:08:57.341 17:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.341 17:01:13 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:57.341 17:01:13 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:57.341 17:01:13 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:57.341 17:01:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.341 17:01:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.341 17:01:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.341 17:01:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.341 17:01:13 -- target/referrals.sh@26 -- # sort 00:08:57.341 17:01:13 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:57.341 17:01:13 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:57.341 17:01:13 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:57.341 17:01:13 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:57.341 17:01:13 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:57.341 17:01:13 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.341 17:01:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:57.611 17:01:13 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:57.611 17:01:13 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:57.611 17:01:13 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:57.611 17:01:13 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:57.611 17:01:13 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.611 17:01:13 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:57.611 17:01:13 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:57.611 17:01:13 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:57.611 17:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.611 17:01:13 -- common/autotest_common.sh@10 -- # set +x 00:08:57.611 17:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.611 17:01:13 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:57.611 17:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:57.611 17:01:13 -- target/referrals.sh@82 -- # jq length 00:08:57.611 17:01:13 -- common/autotest_common.sh@10 -- # set +x 00:08:57.611 17:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:57.611 17:01:13 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:57.611 17:01:13 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:57.611 17:01:13 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:57.611 17:01:13 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:57.611 17:01:13 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:57.611 17:01:13 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:57.611 17:01:13 -- target/referrals.sh@26 -- # sort 00:08:57.611 17:01:13 -- target/referrals.sh@26 -- # echo 00:08:57.611 17:01:13 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:57.611 17:01:13 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:57.611 17:01:13 -- target/referrals.sh@86 -- # nvmftestfini 00:08:57.611 17:01:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:57.611 17:01:13 -- nvmf/common.sh@116 -- # sync 00:08:57.611 17:01:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:57.611 17:01:13 -- nvmf/common.sh@119 -- # set +e 00:08:57.611 17:01:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:57.611 17:01:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:57.611 rmmod nvme_tcp 00:08:57.611 rmmod nvme_fabrics 00:08:57.611 rmmod nvme_keyring 00:08:57.611 17:01:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:57.611 17:01:13 -- nvmf/common.sh@123 -- # set -e 00:08:57.611 17:01:13 -- nvmf/common.sh@124 -- # return 0 00:08:57.611 17:01:13 -- nvmf/common.sh@477 
-- # '[' -n 448052 ']' 00:08:57.611 17:01:13 -- nvmf/common.sh@478 -- # killprocess 448052 00:08:57.611 17:01:13 -- common/autotest_common.sh@926 -- # '[' -z 448052 ']' 00:08:57.611 17:01:13 -- common/autotest_common.sh@930 -- # kill -0 448052 00:08:57.611 17:01:13 -- common/autotest_common.sh@931 -- # uname 00:08:57.611 17:01:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:57.611 17:01:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 448052 00:08:57.611 17:01:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:57.611 17:01:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:57.611 17:01:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 448052' 00:08:57.611 killing process with pid 448052 00:08:57.611 17:01:13 -- common/autotest_common.sh@945 -- # kill 448052 00:08:57.611 17:01:13 -- common/autotest_common.sh@950 -- # wait 448052 00:08:57.872 17:01:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:57.872 17:01:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:57.872 17:01:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:57.872 17:01:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.872 17:01:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:57.872 17:01:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.872 17:01:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.872 17:01:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.406 17:01:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:00.406 00:09:00.406 real 0m6.884s 00:09:00.406 user 0m10.828s 00:09:00.406 sys 0m2.037s 00:09:00.406 17:01:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.406 17:01:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.406 ************************************ 00:09:00.406 END TEST nvmf_referrals 00:09:00.406 ************************************ 00:09:00.406 17:01:16 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:00.406 17:01:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:00.406 17:01:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:00.406 17:01:16 -- common/autotest_common.sh@10 -- # set +x 00:09:00.406 ************************************ 00:09:00.406 START TEST nvmf_connect_disconnect 00:09:00.406 ************************************ 00:09:00.406 17:01:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:00.406 * Looking for test storage... 
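Recapping the referrals test that just completed above: its core is a discovery listener on port 8009, three referrals on port 4430, and kernel-initiator checks that they round-trip. A minimal sketch, again assuming rpc.py as the RPC front end; the hostnqn/hostid arguments from the trace are elided:

    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length      # expect 3
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json               # referrals appear as discovery records
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done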
00:09:00.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.406 17:01:16 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.406 17:01:16 -- nvmf/common.sh@7 -- # uname -s 00:09:00.406 17:01:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.406 17:01:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.406 17:01:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.406 17:01:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.406 17:01:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.406 17:01:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.406 17:01:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.406 17:01:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.406 17:01:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.406 17:01:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.406 17:01:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.406 17:01:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:00.406 17:01:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.406 17:01:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.406 17:01:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.406 17:01:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.406 17:01:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.406 17:01:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.406 17:01:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.406 17:01:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.406 17:01:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.406 17:01:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.406 17:01:16 -- paths/export.sh@5 -- # export PATH 00:09:00.406 17:01:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.406 17:01:16 -- nvmf/common.sh@46 -- # : 0 00:09:00.406 17:01:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:00.406 17:01:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:00.406 17:01:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:00.406 17:01:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.406 17:01:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.406 17:01:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:00.406 17:01:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:00.406 17:01:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:00.406 17:01:16 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.406 17:01:16 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.406 17:01:16 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:00.406 17:01:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:00.406 17:01:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.406 17:01:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:00.406 17:01:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:00.406 17:01:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:00.406 17:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.406 17:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.406 17:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.406 17:01:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:00.406 17:01:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:00.406 17:01:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:00.406 17:01:16 -- common/autotest_common.sh@10 -- # set +x 00:09:02.305 17:01:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:02.305 17:01:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:02.305 17:01:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:02.305 17:01:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:02.305 17:01:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:02.305 17:01:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:02.305 17:01:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:02.305 17:01:18 -- nvmf/common.sh@294 -- # net_devs=() 00:09:02.305 17:01:18 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:09:02.305 17:01:18 -- nvmf/common.sh@295 -- # e810=() 00:09:02.305 17:01:18 -- nvmf/common.sh@295 -- # local -ga e810 00:09:02.305 17:01:18 -- nvmf/common.sh@296 -- # x722=() 00:09:02.305 17:01:18 -- nvmf/common.sh@296 -- # local -ga x722 00:09:02.305 17:01:18 -- nvmf/common.sh@297 -- # mlx=() 00:09:02.305 17:01:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:02.305 17:01:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.305 17:01:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:02.305 17:01:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:02.305 17:01:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:02.305 17:01:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:02.305 17:01:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:02.305 17:01:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:02.305 17:01:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:02.305 17:01:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:02.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:02.305 17:01:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:02.305 17:01:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:02.305 17:01:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.305 17:01:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.305 17:01:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:02.305 17:01:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:02.305 17:01:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:02.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:02.305 17:01:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:02.306 17:01:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:02.306 17:01:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.306 17:01:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:02.306 17:01:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.306 17:01:18 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:09:02.306 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:02.306 17:01:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.306 17:01:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:02.306 17:01:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.306 17:01:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:02.306 17:01:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.306 17:01:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:02.306 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:02.306 17:01:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.306 17:01:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:02.306 17:01:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:02.306 17:01:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:02.306 17:01:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.306 17:01:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.306 17:01:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.306 17:01:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:02.306 17:01:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.306 17:01:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.306 17:01:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:02.306 17:01:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.306 17:01:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.306 17:01:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:02.306 17:01:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:02.306 17:01:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.306 17:01:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.306 17:01:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.306 17:01:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.306 17:01:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:02.306 17:01:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.306 17:01:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.306 17:01:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.306 17:01:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:02.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:09:02.306 00:09:02.306 --- 10.0.0.2 ping statistics --- 00:09:02.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.306 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:02.306 17:01:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:09:02.306 00:09:02.306 --- 10.0.0.1 ping statistics --- 00:09:02.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.306 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:02.306 17:01:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.306 17:01:18 -- nvmf/common.sh@410 -- # return 0 00:09:02.306 17:01:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:02.306 17:01:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.306 17:01:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:02.306 17:01:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.306 17:01:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:02.306 17:01:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:02.306 17:01:18 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:02.306 17:01:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:02.306 17:01:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:02.306 17:01:18 -- common/autotest_common.sh@10 -- # set +x 00:09:02.306 17:01:18 -- nvmf/common.sh@469 -- # nvmfpid=450373 00:09:02.306 17:01:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.306 17:01:18 -- nvmf/common.sh@470 -- # waitforlisten 450373 00:09:02.306 17:01:18 -- common/autotest_common.sh@819 -- # '[' -z 450373 ']' 00:09:02.306 17:01:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.306 17:01:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:02.306 17:01:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.306 17:01:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:02.306 17:01:18 -- common/autotest_common.sh@10 -- # set +x 00:09:02.306 [2024-07-20 17:01:18.370813] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:02.306 [2024-07-20 17:01:18.370903] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.306 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.306 [2024-07-20 17:01:18.441603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.564 [2024-07-20 17:01:18.535023] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:02.564 [2024-07-20 17:01:18.535201] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.564 [2024-07-20 17:01:18.535221] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.564 [2024-07-20 17:01:18.535236] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
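[Annotation: the trace above shows nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace (nvmfpid=450373) and then blocking in waitforlisten until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-poll pattern follows; the relative paths, retry count, and poll interval are illustrative assumptions, not the exact autotest_common.sh implementation.]

    # Sketch: start nvmf_tgt in the test netns and wait for its RPC socket.
    NS=cvl_0_0_ns_spdk                  # namespace created by nvmf_tcp_init
    APP=./build/bin/nvmf_tgt            # assumed path to the target app
    SOCK=/var/tmp/spdk.sock             # default RPC socket seen in this log
    ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -m 0xF &
    pid=$!
    # Poll until a trivial RPC succeeds, the way waitforlisten does.
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt (pid $pid) is listening on $SOCK"
            break
        fi
        sleep 0.5
    done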
00:09:02.564 [2024-07-20 17:01:18.535299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.564 [2024-07-20 17:01:18.535354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.564 [2024-07-20 17:01:18.535403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:02.564 [2024-07-20 17:01:18.535406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.493 17:01:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:03.493 17:01:19 -- common/autotest_common.sh@852 -- # return 0 00:09:03.493 17:01:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:03.493 17:01:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:03.493 17:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:03.493 17:01:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:03.493 17:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.493 17:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:03.493 [2024-07-20 17:01:19.342341] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.493 17:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:03.493 17:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.493 17:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:03.493 17:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:03.493 17:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.493 17:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:03.493 17:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.493 17:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.493 17:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:03.493 17:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.493 17:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:03.493 17:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:03.493 [2024-07-20 17:01:19.398113] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.493 17:01:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:03.493 17:01:19 -- target/connect_disconnect.sh@34 -- # set +x 00:09:06.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
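[Annotation: with the TCP transport, the Malloc0 namespace, and the 4420 listener in place, connect_disconnect.sh sets num_iterations=100 and NVME_CONNECT='nvme connect -i 8', then connects and disconnects once per iteration; each pass emits one of the "disconnected 1 controller(s)" lines that follow. A hedged sketch of such a loop is below; the settle delay and exact flag order are assumptions, and the real script differs in details.]

    # Sketch: repeated NVMe/TCP connect+disconnect against the test subsystem.
    NQN=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        sleep 1                      # let the controller come up before teardown
        nvme disconnect -n "$NQN"    # prints "NQN:... disconnected 1 controller(s)"
    done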
00:09:14.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... 95 further identical "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" entries, timestamps 00:09:17.388 through 00:12:51.239, elided: the loop completed all 100 connect/disconnect iterations ...] 00:12:51.239 17:05:06 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
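[Annotation: after the trap is cleared, the nvmftestfini that follows tears everything down: nvmfcleanup retries module unload (hence the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines below), killprocess stops reactor_0, and nvmf_tcp_fini removes the namespace and flushes cvl_0_1. A rough sketch of the retry-unload idiom visible in the trace; the retry count of 20 matches the "for i in {1..20}" seen here, while the sleep is an assumption.]

    # Sketch: unload the NVMe/TCP modules, retrying while references drain.
    set +e
    for i in $(seq 1 20); do
        # modprobe -v -r also removes dependents, hence the extra rmmod lines.
        if modprobe -v -r nvme-tcp; then
            break
        fi
        sleep 1
    done
    set -e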
00:12:51.239 17:05:06 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:51.239 17:05:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:51.239 17:05:06 -- nvmf/common.sh@116 -- # sync 00:12:51.239 17:05:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:51.239 17:05:06 -- nvmf/common.sh@119 -- # set +e 00:12:51.239 17:05:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:51.239 17:05:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:51.239 rmmod nvme_tcp 00:12:51.239 rmmod nvme_fabrics 00:12:51.239 rmmod nvme_keyring 00:12:51.239 17:05:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:51.239 17:05:06 -- nvmf/common.sh@123 -- # set -e 00:12:51.239 17:05:06 -- nvmf/common.sh@124 -- # return 0 00:12:51.239 17:05:06 -- nvmf/common.sh@477 -- # '[' -n 450373 ']' 00:12:51.239 17:05:06 -- nvmf/common.sh@478 -- # killprocess 450373 00:12:51.239 17:05:06 -- common/autotest_common.sh@926 -- # '[' -z 450373 ']' 00:12:51.239 17:05:06 -- common/autotest_common.sh@930 -- # kill -0 450373 00:12:51.239 17:05:06 -- common/autotest_common.sh@931 -- # uname 00:12:51.239 17:05:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:51.239 17:05:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 450373 00:12:51.239 17:05:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:51.239 17:05:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:51.239 17:05:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 450373' 00:12:51.239 killing process with pid 450373 00:12:51.239 17:05:06 -- common/autotest_common.sh@945 -- # kill 450373 00:12:51.239 17:05:06 -- common/autotest_common.sh@950 -- # wait 450373 00:12:51.239 17:05:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:51.239 17:05:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:51.239 17:05:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:51.239 17:05:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:51.239 17:05:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:51.239 17:05:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.239 17:05:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.239 17:05:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.139 17:05:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:53.139 00:12:53.139 real 3m53.098s 00:12:53.139 user 14m46.786s 00:12:53.139 sys 0m31.350s 00:12:53.139 17:05:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.139 17:05:09 -- common/autotest_common.sh@10 -- # set +x 00:12:53.139 ************************************ 00:12:53.139 END TEST nvmf_connect_disconnect 00:12:53.139 ************************************ 00:12:53.139 17:05:09 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:53.139 17:05:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:53.139 17:05:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:53.139 17:05:09 -- common/autotest_common.sh@10 -- # set +x 00:12:53.139 ************************************ 00:12:53.139 START TEST nvmf_multitarget 00:12:53.139 ************************************ 00:12:53.139 17:05:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:53.139 * Looking for test storage... 
00:12:53.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.139 17:05:09 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.139 17:05:09 -- nvmf/common.sh@7 -- # uname -s 00:12:53.139 17:05:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.139 17:05:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.139 17:05:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.139 17:05:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.139 17:05:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.139 17:05:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.139 17:05:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.139 17:05:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.139 17:05:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.139 17:05:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.140 17:05:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.140 17:05:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.140 17:05:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.140 17:05:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.140 17:05:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.140 17:05:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.140 17:05:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.140 17:05:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.140 17:05:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.140 17:05:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.140 17:05:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.140 17:05:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.140 17:05:09 -- paths/export.sh@5 -- # export PATH 00:12:53.140 17:05:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.140 17:05:09 -- nvmf/common.sh@46 -- # : 0 00:12:53.140 17:05:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:53.140 17:05:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:53.140 17:05:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:53.140 17:05:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.140 17:05:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.140 17:05:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:53.140 17:05:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:53.140 17:05:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:53.140 17:05:09 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:53.140 17:05:09 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:53.140 17:05:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:53.140 17:05:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.140 17:05:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:53.140 17:05:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:53.140 17:05:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:53.140 17:05:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.140 17:05:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.140 17:05:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.140 17:05:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:53.140 17:05:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:53.140 17:05:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:53.140 17:05:09 -- common/autotest_common.sh@10 -- # set +x 00:12:55.042 17:05:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:55.042 17:05:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:55.042 17:05:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:55.042 17:05:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:55.042 17:05:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:55.042 17:05:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:55.042 17:05:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:55.042 17:05:11 -- nvmf/common.sh@294 -- # net_devs=() 00:12:55.042 17:05:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:55.042 17:05:11 -- 
nvmf/common.sh@295 -- # e810=() 00:12:55.042 17:05:11 -- nvmf/common.sh@295 -- # local -ga e810 00:12:55.042 17:05:11 -- nvmf/common.sh@296 -- # x722=() 00:12:55.042 17:05:11 -- nvmf/common.sh@296 -- # local -ga x722 00:12:55.042 17:05:11 -- nvmf/common.sh@297 -- # mlx=() 00:12:55.042 17:05:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:55.042 17:05:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.042 17:05:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:55.042 17:05:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:55.042 17:05:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:55.042 17:05:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:55.042 17:05:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:55.042 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:55.042 17:05:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:55.042 17:05:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:55.042 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:55.042 17:05:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:55.042 17:05:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:55.042 17:05:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:55.042 17:05:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.042 17:05:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:55.042 17:05:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.042 17:05:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:12:55.042 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:55.042 17:05:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.042 17:05:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:55.042 17:05:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.042 17:05:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:55.042 17:05:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.042 17:05:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:55.042 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:55.042 17:05:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.042 17:05:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:55.042 17:05:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:55.042 17:05:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:55.043 17:05:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:55.043 17:05:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:55.043 17:05:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.043 17:05:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.043 17:05:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.043 17:05:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:55.043 17:05:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.043 17:05:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.043 17:05:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:55.043 17:05:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.043 17:05:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.043 17:05:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:55.300 17:05:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:55.300 17:05:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.300 17:05:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.300 17:05:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.300 17:05:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.300 17:05:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:55.300 17:05:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.300 17:05:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.300 17:05:11 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.300 17:05:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:55.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:12:55.300 00:12:55.300 --- 10.0.0.2 ping statistics --- 00:12:55.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.300 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:12:55.300 17:05:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:12:55.300 00:12:55.300 --- 10.0.0.1 ping statistics --- 00:12:55.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.300 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:55.300 17:05:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.300 17:05:11 -- nvmf/common.sh@410 -- # return 0 00:12:55.300 17:05:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:55.300 17:05:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.300 17:05:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:55.300 17:05:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:55.300 17:05:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.300 17:05:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:55.300 17:05:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:55.300 17:05:11 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:55.300 17:05:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:55.300 17:05:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:55.300 17:05:11 -- common/autotest_common.sh@10 -- # set +x 00:12:55.300 17:05:11 -- nvmf/common.sh@469 -- # nvmfpid=481955 00:12:55.300 17:05:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.300 17:05:11 -- nvmf/common.sh@470 -- # waitforlisten 481955 00:12:55.300 17:05:11 -- common/autotest_common.sh@819 -- # '[' -z 481955 ']' 00:12:55.300 17:05:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.300 17:05:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:55.300 17:05:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.300 17:05:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:55.300 17:05:11 -- common/autotest_common.sh@10 -- # set +x 00:12:55.300 [2024-07-20 17:05:11.405588] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:55.300 [2024-07-20 17:05:11.405664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.300 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.558 [2024-07-20 17:05:11.473549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.558 [2024-07-20 17:05:11.562060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:55.558 [2024-07-20 17:05:11.562200] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.558 [2024-07-20 17:05:11.562217] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.558 [2024-07-20 17:05:11.562229] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
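[Annotation: the nvmf_tcp_init sequence above is the same plumbing every phy TCP test in this run uses: move one port of the NIC pair into a private namespace, give the pair the 10.0.0.1/10.0.0.2 addresses, open TCP 4420, and prove reachability with ping in both directions. Condensed into a standalone sketch below, with interface names taken from this log and error handling omitted.]

    # Sketch: two-interface NVMe/TCP test topology, as set up by nvmf_tcp_init.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target side lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1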
00:12:55.558 [2024-07-20 17:05:11.562287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.558 [2024-07-20 17:05:11.562345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.558 [2024-07-20 17:05:11.562389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.558 [2024-07-20 17:05:11.562392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.489 17:05:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:56.489 17:05:12 -- common/autotest_common.sh@852 -- # return 0 00:12:56.489 17:05:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:56.489 17:05:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:56.489 17:05:12 -- common/autotest_common.sh@10 -- # set +x 00:12:56.489 17:05:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.489 17:05:12 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:56.489 17:05:12 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:56.489 17:05:12 -- target/multitarget.sh@21 -- # jq length 00:12:56.489 17:05:12 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:56.489 17:05:12 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:56.489 "nvmf_tgt_1" 00:12:56.489 17:05:12 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:56.746 "nvmf_tgt_2" 00:12:56.746 17:05:12 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:56.746 17:05:12 -- target/multitarget.sh@28 -- # jq length 00:12:56.746 17:05:12 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:56.746 17:05:12 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:57.017 true 00:12:57.017 17:05:12 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:57.017 true 00:12:57.017 17:05:13 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:57.017 17:05:13 -- target/multitarget.sh@35 -- # jq length 00:12:57.279 17:05:13 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:57.279 17:05:13 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:57.279 17:05:13 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:57.279 17:05:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:57.279 17:05:13 -- nvmf/common.sh@116 -- # sync 00:12:57.279 17:05:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:57.279 17:05:13 -- nvmf/common.sh@119 -- # set +e 00:12:57.279 17:05:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:57.279 17:05:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:57.279 rmmod nvme_tcp 00:12:57.279 rmmod nvme_fabrics 00:12:57.279 rmmod nvme_keyring 00:12:57.279 17:05:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:57.279 17:05:13 -- nvmf/common.sh@123 -- # set -e 00:12:57.279 17:05:13 -- nvmf/common.sh@124 -- # return 0 
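[Annotation: the multitarget checks above drive multitarget_rpc.py: starting from the one default target, they create nvmf_tgt_1 and nvmf_tgt_2 with -s 32, confirm the count via 'nvmf_get_targets | jq length', then delete both and confirm the count is back to 1. A compact sketch of that assertion pattern; the rpc script path and flags are copied from this log, while the expect helper is a hypothetical convenience.]

    # Sketch: create/delete extra nvmf targets and assert the count via jq.
    rpc=./test/nvmf/target/multitarget_rpc.py     # path as invoked in this run
    expect() { [ "$($rpc nvmf_get_targets | jq length)" -eq "$1" ] || exit 1; }
    expect 1
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    expect 3
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    expect 1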
00:12:57.279 17:05:13 -- nvmf/common.sh@477 -- # '[' -n 481955 ']' 00:12:57.279 17:05:13 -- nvmf/common.sh@478 -- # killprocess 481955 00:12:57.279 17:05:13 -- common/autotest_common.sh@926 -- # '[' -z 481955 ']' 00:12:57.279 17:05:13 -- common/autotest_common.sh@930 -- # kill -0 481955 00:12:57.279 17:05:13 -- common/autotest_common.sh@931 -- # uname 00:12:57.279 17:05:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:57.279 17:05:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 481955 00:12:57.279 17:05:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:57.279 17:05:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:57.279 17:05:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 481955' 00:12:57.279 killing process with pid 481955 00:12:57.279 17:05:13 -- common/autotest_common.sh@945 -- # kill 481955 00:12:57.279 17:05:13 -- common/autotest_common.sh@950 -- # wait 481955 00:12:57.536 17:05:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:57.536 17:05:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:57.536 17:05:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:57.536 17:05:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.536 17:05:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:57.536 17:05:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.536 17:05:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.536 17:05:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.438 17:05:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:59.438 00:12:59.438 real 0m6.333s 00:12:59.438 user 0m9.297s 00:12:59.438 sys 0m1.917s 00:12:59.438 17:05:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.438 17:05:15 -- common/autotest_common.sh@10 -- # set +x 00:12:59.438 ************************************ 00:12:59.438 END TEST nvmf_multitarget 00:12:59.438 ************************************ 00:12:59.438 17:05:15 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:59.438 17:05:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:59.438 17:05:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:59.438 17:05:15 -- common/autotest_common.sh@10 -- # set +x 00:12:59.438 ************************************ 00:12:59.438 START TEST nvmf_rpc 00:12:59.438 ************************************ 00:12:59.438 17:05:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:59.696 * Looking for test storage... 
00:12:59.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.696 17:05:15 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.696 17:05:15 -- nvmf/common.sh@7 -- # uname -s 00:12:59.696 17:05:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.696 17:05:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.696 17:05:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.696 17:05:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.696 17:05:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.696 17:05:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.696 17:05:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.696 17:05:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.696 17:05:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.696 17:05:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.696 17:05:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.696 17:05:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.696 17:05:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.696 17:05:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.696 17:05:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.696 17:05:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.696 17:05:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.696 17:05:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.696 17:05:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.696 17:05:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.696 17:05:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.696 17:05:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.696 17:05:15 -- paths/export.sh@5 -- # export PATH 00:12:59.696 17:05:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.696 17:05:15 -- nvmf/common.sh@46 -- # : 0 00:12:59.696 17:05:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:59.696 17:05:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:59.696 17:05:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:59.696 17:05:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.696 17:05:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.697 17:05:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:59.697 17:05:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:59.697 17:05:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:59.697 17:05:15 -- target/rpc.sh@11 -- # loops=5 00:12:59.697 17:05:15 -- target/rpc.sh@23 -- # nvmftestinit 00:12:59.697 17:05:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:59.697 17:05:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.697 17:05:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:59.697 17:05:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:59.697 17:05:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:59.697 17:05:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.697 17:05:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.697 17:05:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.697 17:05:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:59.697 17:05:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:59.697 17:05:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:59.697 17:05:15 -- common/autotest_common.sh@10 -- # set +x 00:13:01.599 17:05:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:01.599 17:05:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:01.599 17:05:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:01.599 17:05:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:01.599 17:05:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:01.599 17:05:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:01.599 17:05:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:01.599 17:05:17 -- nvmf/common.sh@294 -- # net_devs=() 00:13:01.599 17:05:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:01.599 17:05:17 -- nvmf/common.sh@295 -- # e810=() 00:13:01.599 17:05:17 -- nvmf/common.sh@295 -- # local -ga e810 00:13:01.599 
17:05:17 -- nvmf/common.sh@296 -- # x722=() 00:13:01.599 17:05:17 -- nvmf/common.sh@296 -- # local -ga x722 00:13:01.599 17:05:17 -- nvmf/common.sh@297 -- # mlx=() 00:13:01.599 17:05:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:01.599 17:05:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.599 17:05:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:01.599 17:05:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:01.599 17:05:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:01.599 17:05:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:01.599 17:05:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:01.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:01.599 17:05:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:01.599 17:05:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:01.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:01.599 17:05:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:01.599 17:05:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:01.599 17:05:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.599 17:05:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:01.599 17:05:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.599 17:05:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:01.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:01.599 17:05:17 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:01.599 17:05:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:01.599 17:05:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.599 17:05:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:01.599 17:05:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.599 17:05:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:01.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:01.599 17:05:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.599 17:05:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:01.599 17:05:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:01.599 17:05:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:01.599 17:05:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:01.599 17:05:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.599 17:05:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.599 17:05:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.599 17:05:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:01.599 17:05:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.599 17:05:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.599 17:05:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:01.599 17:05:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.599 17:05:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.599 17:05:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:01.599 17:05:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:01.599 17:05:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.599 17:05:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.599 17:05:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.599 17:05:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.599 17:05:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:01.599 17:05:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.599 17:05:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.599 17:05:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.599 17:05:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:01.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:13:01.599 00:13:01.599 --- 10.0.0.2 ping statistics --- 00:13:01.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.599 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:13:01.600 17:05:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:13:01.600 00:13:01.600 --- 10.0.0.1 ping statistics --- 00:13:01.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.600 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:01.600 17:05:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.600 17:05:17 -- nvmf/common.sh@410 -- # return 0 00:13:01.600 17:05:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:01.600 17:05:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.600 17:05:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:01.600 17:05:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:01.600 17:05:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.600 17:05:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:01.600 17:05:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:01.600 17:05:17 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:01.600 17:05:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:01.600 17:05:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:01.600 17:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:01.600 17:05:17 -- nvmf/common.sh@469 -- # nvmfpid=484209 00:13:01.600 17:05:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.600 17:05:17 -- nvmf/common.sh@470 -- # waitforlisten 484209 00:13:01.600 17:05:17 -- common/autotest_common.sh@819 -- # '[' -z 484209 ']' 00:13:01.600 17:05:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.600 17:05:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:01.600 17:05:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.600 17:05:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:01.600 17:05:17 -- common/autotest_common.sh@10 -- # set +x 00:13:01.600 [2024-07-20 17:05:17.693790] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:01.600 [2024-07-20 17:05:17.693876] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.600 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.858 [2024-07-20 17:05:17.767983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.858 [2024-07-20 17:05:17.859343] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:01.858 [2024-07-20 17:05:17.859505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.858 [2024-07-20 17:05:17.859523] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.858 [2024-07-20 17:05:17.859535] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
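[Editor's note] nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace created during nvmf_tcp_init, and waitforlisten then polls until the RPC socket answers. A rough equivalent of that startup sequence (the SPDK_DIR path is an assumption taken from the workspace layout in this log; rpc_get_methods is a standard SPDK RPC):

# Start the target in the network namespace and wait for its UNIX-domain
# RPC socket to come up (sketch, not the test's actual helper).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for i in $(seq 1 100); do
  # rpc.py fails until the app is listening on /var/tmp/spdk.sock
  if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
    echo "nvmf_tgt (pid $nvmfpid) is up"
    break
  fi
  sleep 0.1
done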
00:13:01.858 [2024-07-20 17:05:17.859598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.858 [2024-07-20 17:05:17.859671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.858 [2024-07-20 17:05:17.859790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.858 [2024-07-20 17:05:17.859800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.792 17:05:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:02.792 17:05:18 -- common/autotest_common.sh@852 -- # return 0 00:13:02.792 17:05:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:02.792 17:05:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:02.792 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:02.792 17:05:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.792 17:05:18 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:02.792 17:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.792 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:02.792 17:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.792 17:05:18 -- target/rpc.sh@26 -- # stats='{ 00:13:02.792 "tick_rate": 2700000000, 00:13:02.792 "poll_groups": [ 00:13:02.792 { 00:13:02.792 "name": "nvmf_tgt_poll_group_0", 00:13:02.792 "admin_qpairs": 0, 00:13:02.792 "io_qpairs": 0, 00:13:02.792 "current_admin_qpairs": 0, 00:13:02.792 "current_io_qpairs": 0, 00:13:02.792 "pending_bdev_io": 0, 00:13:02.792 "completed_nvme_io": 0, 00:13:02.792 "transports": [] 00:13:02.792 }, 00:13:02.792 { 00:13:02.792 "name": "nvmf_tgt_poll_group_1", 00:13:02.792 "admin_qpairs": 0, 00:13:02.792 "io_qpairs": 0, 00:13:02.792 "current_admin_qpairs": 0, 00:13:02.792 "current_io_qpairs": 0, 00:13:02.792 "pending_bdev_io": 0, 00:13:02.792 "completed_nvme_io": 0, 00:13:02.792 "transports": [] 00:13:02.792 }, 00:13:02.792 { 00:13:02.792 "name": "nvmf_tgt_poll_group_2", 00:13:02.792 "admin_qpairs": 0, 00:13:02.792 "io_qpairs": 0, 00:13:02.792 "current_admin_qpairs": 0, 00:13:02.792 "current_io_qpairs": 0, 00:13:02.792 "pending_bdev_io": 0, 00:13:02.792 "completed_nvme_io": 0, 00:13:02.792 "transports": [] 00:13:02.792 }, 00:13:02.792 { 00:13:02.792 "name": "nvmf_tgt_poll_group_3", 00:13:02.792 "admin_qpairs": 0, 00:13:02.793 "io_qpairs": 0, 00:13:02.793 "current_admin_qpairs": 0, 00:13:02.793 "current_io_qpairs": 0, 00:13:02.793 "pending_bdev_io": 0, 00:13:02.793 "completed_nvme_io": 0, 00:13:02.793 "transports": [] 00:13:02.793 } 00:13:02.793 ] 00:13:02.793 }' 00:13:02.793 17:05:18 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:02.793 17:05:18 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:02.793 17:05:18 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:02.793 17:05:18 -- target/rpc.sh@15 -- # wc -l 00:13:02.793 17:05:18 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:02.793 17:05:18 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:02.793 17:05:18 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:02.793 17:05:18 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.793 17:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.793 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:02.793 [2024-07-20 17:05:18.794761] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.793 17:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.793 17:05:18 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:02.793 17:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.793 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:02.793 17:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.793 17:05:18 -- target/rpc.sh@33 -- # stats='{ 00:13:02.793 "tick_rate": 2700000000, 00:13:02.793 "poll_groups": [ 00:13:02.793 { 00:13:02.793 "name": "nvmf_tgt_poll_group_0", 00:13:02.793 "admin_qpairs": 0, 00:13:02.793 "io_qpairs": 0, 00:13:02.793 "current_admin_qpairs": 0, 00:13:02.793 "current_io_qpairs": 0, 00:13:02.793 "pending_bdev_io": 0, 00:13:02.793 "completed_nvme_io": 0, 00:13:02.793 "transports": [ 00:13:02.793 { 00:13:02.793 "trtype": "TCP" 00:13:02.793 } 00:13:02.793 ] 00:13:02.793 }, 00:13:02.793 { 00:13:02.793 "name": "nvmf_tgt_poll_group_1", 00:13:02.793 "admin_qpairs": 0, 00:13:02.793 "io_qpairs": 0, 00:13:02.793 "current_admin_qpairs": 0, 00:13:02.793 "current_io_qpairs": 0, 00:13:02.793 "pending_bdev_io": 0, 00:13:02.793 "completed_nvme_io": 0, 00:13:02.793 "transports": [ 00:13:02.793 { 00:13:02.793 "trtype": "TCP" 00:13:02.793 } 00:13:02.793 ] 00:13:02.793 }, 00:13:02.793 { 00:13:02.793 "name": "nvmf_tgt_poll_group_2", 00:13:02.793 "admin_qpairs": 0, 00:13:02.793 "io_qpairs": 0, 00:13:02.793 "current_admin_qpairs": 0, 00:13:02.793 "current_io_qpairs": 0, 00:13:02.793 "pending_bdev_io": 0, 00:13:02.793 "completed_nvme_io": 0, 00:13:02.793 "transports": [ 00:13:02.793 { 00:13:02.793 "trtype": "TCP" 00:13:02.793 } 00:13:02.793 ] 00:13:02.793 }, 00:13:02.793 { 00:13:02.793 "name": "nvmf_tgt_poll_group_3", 00:13:02.793 "admin_qpairs": 0, 00:13:02.793 "io_qpairs": 0, 00:13:02.793 "current_admin_qpairs": 0, 00:13:02.793 "current_io_qpairs": 0, 00:13:02.793 "pending_bdev_io": 0, 00:13:02.793 "completed_nvme_io": 0, 00:13:02.793 "transports": [ 00:13:02.793 { 00:13:02.793 "trtype": "TCP" 00:13:02.793 } 00:13:02.793 ] 00:13:02.793 } 00:13:02.793 ] 00:13:02.793 }' 00:13:02.793 17:05:18 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:02.793 17:05:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:02.793 17:05:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:02.793 17:05:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.793 17:05:18 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:02.793 17:05:18 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:02.793 17:05:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:02.793 17:05:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:02.793 17:05:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.793 17:05:18 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:02.793 17:05:18 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:02.793 17:05:18 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:02.793 17:05:18 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:02.793 17:05:18 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:02.793 17:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.793 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:02.793 Malloc1 00:13:02.793 17:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.793 17:05:18 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:02.793 17:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.793 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:02.793 
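[Editor's note] The rpc_cmd calls above translate to plain rpc.py invocations against the target's socket. The sequence the test runs here — a TCP transport with the C2H-success optimization disabled, a 64 MiB malloc bdev, then a subsystem — looks roughly like this (rpc.py path assumed; flags copied from the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
# TCP transport: -o disables the C2H success optimization,
# -u sets the I/O unit size to 8192 bytes.
$rpc nvmf_create_transport -t tcp -o -u 8192
# 64 MiB bdev with 512-byte blocks, to be exported as a namespace.
$rpc bdev_malloc_create 64 512 -b Malloc1
# Subsystem with the serial number the initiator side greps for.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME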
17:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.793 17:05:18 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.793 17:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.793 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:02.793 17:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.793 17:05:18 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:02.793 17:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.793 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:02.793 17:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.793 17:05:18 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.793 17:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.793 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:02.793 [2024-07-20 17:05:18.938393] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.793 17:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.793 17:05:18 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:02.793 17:05:18 -- common/autotest_common.sh@640 -- # local es=0 00:13:02.793 17:05:18 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:02.793 17:05:18 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:02.793 17:05:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:02.793 17:05:18 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:02.793 17:05:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:02.793 17:05:18 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:02.793 17:05:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:02.793 17:05:18 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:02.793 17:05:18 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:02.793 17:05:18 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:03.051 [2024-07-20 17:05:18.961020] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:03.051 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:03.051 could not add new controller: failed to write to nvme-fabrics device 00:13:03.051 17:05:18 -- common/autotest_common.sh@643 -- # es=1 00:13:03.051 17:05:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:03.051 17:05:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:03.051 17:05:18 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:03.051 17:05:18 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.051 17:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.051 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:03.051 17:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.051 17:05:18 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.616 17:05:19 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.616 17:05:19 -- common/autotest_common.sh@1177 -- # local i=0 00:13:03.616 17:05:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.616 17:05:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:03.616 17:05:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:05.512 17:05:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:05.512 17:05:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:05.512 17:05:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.512 17:05:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:05.512 17:05:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.512 17:05:21 -- common/autotest_common.sh@1187 -- # return 0 00:13:05.512 17:05:21 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.512 17:05:21 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.512 17:05:21 -- common/autotest_common.sh@1198 -- # local i=0 00:13:05.512 17:05:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:05.512 17:05:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.796 17:05:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:05.796 17:05:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.796 17:05:21 -- common/autotest_common.sh@1210 -- # return 0 00:13:05.796 17:05:21 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:05.796 17:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.796 17:05:21 -- common/autotest_common.sh@10 -- # set +x 00:13:05.796 17:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.796 17:05:21 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.796 17:05:21 -- common/autotest_common.sh@640 -- # local es=0 00:13:05.796 17:05:21 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.797 17:05:21 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:05.797 17:05:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:05.797 17:05:21 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:05.797 17:05:21 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:05.797 17:05:21 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:05.797 17:05:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:05.797 17:05:21 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:05.797 17:05:21 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:05.797 17:05:21 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:05.797 [2024-07-20 17:05:21.706803] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:05.797 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:05.797 could not add new controller: failed to write to nvme-fabrics device 00:13:05.797 17:05:21 -- common/autotest_common.sh@643 -- # es=1 00:13:05.797 17:05:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:05.797 17:05:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:05.797 17:05:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:05.797 17:05:21 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:05.797 17:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.797 17:05:21 -- common/autotest_common.sh@10 -- # set +x 00:13:05.797 17:05:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.797 17:05:21 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.359 17:05:22 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.359 17:05:22 -- common/autotest_common.sh@1177 -- # local i=0 00:13:06.360 17:05:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.360 17:05:22 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:06.360 17:05:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:08.253 17:05:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:08.253 17:05:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:08.253 17:05:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.253 17:05:24 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:08.253 17:05:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.253 17:05:24 -- common/autotest_common.sh@1187 -- # return 0 00:13:08.253 17:05:24 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.253 17:05:24 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.253 17:05:24 -- common/autotest_common.sh@1198 -- # local i=0 00:13:08.253 17:05:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:08.253 17:05:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.253 17:05:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:08.253 17:05:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.253 17:05:24 -- common/autotest_common.sh@1210 -- # return 0 00:13:08.253 17:05:24 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.253 17:05:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.253 17:05:24 -- common/autotest_common.sh@10 -- # set +x 00:13:08.253 17:05:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.253 17:05:24 -- target/rpc.sh@81 -- # seq 1 5 00:13:08.253 17:05:24 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.253 17:05:24 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.253 17:05:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.253 17:05:24 -- common/autotest_common.sh@10 -- # set +x 00:13:08.253 17:05:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.253 17:05:24 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.253 17:05:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.253 17:05:24 -- common/autotest_common.sh@10 -- # set +x 00:13:08.253 [2024-07-20 17:05:24.369674] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.253 17:05:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.253 17:05:24 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.253 17:05:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.253 17:05:24 -- common/autotest_common.sh@10 -- # set +x 00:13:08.253 17:05:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.253 17:05:24 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.253 17:05:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:08.253 17:05:24 -- common/autotest_common.sh@10 -- # set +x 00:13:08.253 17:05:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:08.253 17:05:24 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:08.817 17:05:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.817 17:05:24 -- common/autotest_common.sh@1177 -- # local i=0 00:13:08.817 17:05:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.817 17:05:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:08.817 17:05:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:11.341 17:05:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:11.341 17:05:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:11.341 17:05:26 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.341 17:05:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:11.341 17:05:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.341 17:05:26 -- common/autotest_common.sh@1187 -- # return 0 00:13:11.341 17:05:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.341 17:05:27 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.341 17:05:27 -- common/autotest_common.sh@1198 -- # local i=0 00:13:11.341 17:05:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:11.341 17:05:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
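[Editor's note] The waitforserial and waitforserial_disconnect helpers visible above and below do nothing more than poll lsblk for the subsystem's serial string until the namespace appears (or disappears). A stripped-down sketch of the appearance side, simplified from the xtrace output here (the real helper also compares device counts):

# Wait until a block device with the given serial shows up.
waitforserial() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    # lsblk prints one NAME,SERIAL pair per line for each namespace
    if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
      return 0
    fi
    sleep 2
  done
  return 1
}
waitforserial SPDKISFASTANDAWESOME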
00:13:11.341 17:05:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:11.341 17:05:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.341 17:05:27 -- common/autotest_common.sh@1210 -- # return 0 00:13:11.341 17:05:27 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.341 17:05:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.341 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:13:11.341 17:05:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.341 17:05:27 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.341 17:05:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.341 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:13:11.341 17:05:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.341 17:05:27 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:11.341 17:05:27 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.341 17:05:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.341 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:13:11.341 17:05:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.341 17:05:27 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.341 17:05:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.341 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:13:11.341 [2024-07-20 17:05:27.050942] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.341 17:05:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.341 17:05:27 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:11.341 17:05:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.341 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:13:11.341 17:05:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.341 17:05:27 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.341 17:05:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:11.341 17:05:27 -- common/autotest_common.sh@10 -- # set +x 00:13:11.341 17:05:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:11.341 17:05:27 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.598 17:05:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.598 17:05:27 -- common/autotest_common.sh@1177 -- # local i=0 00:13:11.598 17:05:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.598 17:05:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:11.598 17:05:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:14.177 17:05:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:14.177 17:05:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:14.178 17:05:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.178 17:05:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:14.178 17:05:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.178 17:05:29 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:14.178 17:05:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.178 17:05:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.178 17:05:29 -- common/autotest_common.sh@1198 -- # local i=0 00:13:14.178 17:05:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:14.178 17:05:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.178 17:05:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:14.178 17:05:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.178 17:05:29 -- common/autotest_common.sh@1210 -- # return 0 00:13:14.178 17:05:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.178 17:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.178 17:05:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 17:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.178 17:05:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.178 17:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.178 17:05:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 17:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.178 17:05:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.178 17:05:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.178 17:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.178 17:05:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 17:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.178 17:05:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.178 17:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.178 17:05:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 [2024-07-20 17:05:29.805314] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.178 17:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.178 17:05:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.178 17:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.178 17:05:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 17:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.178 17:05:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.178 17:05:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:14.178 17:05:29 -- common/autotest_common.sh@10 -- # set +x 00:13:14.178 17:05:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:14.178 17:05:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.436 17:05:30 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.436 17:05:30 -- common/autotest_common.sh@1177 -- # local i=0 00:13:14.436 17:05:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.436 17:05:30 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:14.436 17:05:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:16.334 17:05:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:16.334 17:05:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:16.334 17:05:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.334 17:05:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:16.334 17:05:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.334 17:05:32 -- common/autotest_common.sh@1187 -- # return 0 00:13:16.334 17:05:32 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.593 17:05:32 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.593 17:05:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:16.593 17:05:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:16.593 17:05:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.593 17:05:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:16.593 17:05:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.593 17:05:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:16.593 17:05:32 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.593 17:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.593 17:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:16.593 17:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.593 17:05:32 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.593 17:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.593 17:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:16.593 17:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.593 17:05:32 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.593 17:05:32 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.593 17:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.593 17:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:16.593 17:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.593 17:05:32 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.593 17:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.593 17:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:16.593 [2024-07-20 17:05:32.567822] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.593 17:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.593 17:05:32 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.593 17:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.593 17:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:16.593 17:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.593 17:05:32 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.593 17:05:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.593 17:05:32 -- common/autotest_common.sh@10 -- # set +x 00:13:16.593 17:05:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.593 
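[Editor's note] The two "does not allow host" failures earlier in this run demonstrate the access-control path: a connect with an unregistered host NQN is refused until that host is added to the subsystem or allow-any-host is enabled. The knobs involved, condensed from the rpc_cmd and nvme connect lines in this log (rpc.py path assumed):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
nqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# Disable allow-any-host: connects from unregistered hosts now fail with
# "Subsystem ... does not allow host ..." as seen above.
$rpc nvmf_subsystem_allow_any_host -d "$nqn"
# Whitelist a single host NQN...
$rpc nvmf_subsystem_add_host "$nqn" "$hostnqn"
# ...or re-open the subsystem to any host.
$rpc nvmf_subsystem_allow_any_host -e "$nqn"
# Registered hosts can then connect:
nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 --hostnqn="$hostnqn"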
17:05:32 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.158 17:05:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.158 17:05:33 -- common/autotest_common.sh@1177 -- # local i=0 00:13:17.158 17:05:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.158 17:05:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:17.158 17:05:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:19.052 17:05:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:19.052 17:05:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:19.052 17:05:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.052 17:05:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:19.052 17:05:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.052 17:05:35 -- common/autotest_common.sh@1187 -- # return 0 00:13:19.052 17:05:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.310 17:05:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.310 17:05:35 -- common/autotest_common.sh@1198 -- # local i=0 00:13:19.310 17:05:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:19.310 17:05:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.310 17:05:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:19.310 17:05:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.310 17:05:35 -- common/autotest_common.sh@1210 -- # return 0 00:13:19.310 17:05:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.310 17:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.310 17:05:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.310 17:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.310 17:05:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.310 17:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.310 17:05:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.310 17:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.310 17:05:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.310 17:05:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.310 17:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.310 17:05:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.310 17:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.310 17:05:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.310 17:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.310 17:05:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.310 [2024-07-20 17:05:35.332876] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.310 17:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.310 17:05:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.310 
17:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.310 17:05:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.310 17:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.310 17:05:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.310 17:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:19.310 17:05:35 -- common/autotest_common.sh@10 -- # set +x 00:13:19.310 17:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:19.310 17:05:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.875 17:05:35 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.875 17:05:35 -- common/autotest_common.sh@1177 -- # local i=0 00:13:19.875 17:05:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.875 17:05:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:19.875 17:05:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:21.770 17:05:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:21.770 17:05:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:21.770 17:05:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.770 17:05:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:21.770 17:05:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.770 17:05:37 -- common/autotest_common.sh@1187 -- # return 0 00:13:21.770 17:05:37 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.028 17:05:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.028 17:05:37 -- common/autotest_common.sh@1198 -- # local i=0 00:13:22.028 17:05:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:22.028 17:05:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.028 17:05:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:22.028 17:05:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.028 17:05:37 -- common/autotest_common.sh@1210 -- # return 0 00:13:22.028 17:05:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.028 17:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:37 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@99 -- # seq 1 5 00:13:22.028 17:05:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.028 17:05:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 [2024-07-20 17:05:38.033656] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.028 17:05:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 [2024-07-20 17:05:38.081729] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.028 17:05:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 [2024-07-20 17:05:38.129893] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.028 17:05:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 17:05:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.028 [2024-07-20 17:05:38.178095] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.028 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.028 
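[Editor's note] This final block (target/rpc.sh@99-107) exercises subsystem teardown without ever connecting an initiator: five iterations of create, listen, add namespace, remove namespace, delete. Condensed, the loop is roughly the following sketch (rpc.py path assumed; the RPC names and arguments are the ones shown in the log):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
nqn=nqn.2016-06.io.spdk:cnode1
for i in $(seq 1 5); do
  $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1        # namespace gets NSID 1
  $rpc nvmf_subsystem_allow_any_host "$nqn"
  $rpc nvmf_subsystem_remove_ns "$nqn" 1           # remove NSID 1 again
  $rpc nvmf_delete_subsystem "$nqn"
done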
17:05:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.028 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.028 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:22.285 17:05:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 [2024-07-20 17:05:38.226248] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:13:22.285 17:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:22.285 17:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:22.285 17:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:22.285 17:05:38 -- target/rpc.sh@110 -- # stats='{ 00:13:22.285 "tick_rate": 2700000000, 00:13:22.285 "poll_groups": [ 00:13:22.285 { 00:13:22.285 "name": "nvmf_tgt_poll_group_0", 00:13:22.285 "admin_qpairs": 2, 00:13:22.285 "io_qpairs": 84, 00:13:22.285 "current_admin_qpairs": 0, 00:13:22.285 "current_io_qpairs": 0, 00:13:22.285 "pending_bdev_io": 0, 00:13:22.285 "completed_nvme_io": 134, 00:13:22.285 "transports": [ 00:13:22.285 { 00:13:22.285 "trtype": "TCP" 00:13:22.285 } 00:13:22.285 ] 00:13:22.286 }, 00:13:22.286 { 00:13:22.286 "name": "nvmf_tgt_poll_group_1", 00:13:22.286 "admin_qpairs": 2, 00:13:22.286 "io_qpairs": 84, 00:13:22.286 "current_admin_qpairs": 0, 00:13:22.286 "current_io_qpairs": 0, 00:13:22.286 "pending_bdev_io": 0, 00:13:22.286 "completed_nvme_io": 232, 00:13:22.286 "transports": [ 00:13:22.286 { 00:13:22.286 "trtype": "TCP" 00:13:22.286 } 00:13:22.286 ] 00:13:22.286 }, 00:13:22.286 { 00:13:22.286 "name": "nvmf_tgt_poll_group_2", 00:13:22.286 "admin_qpairs": 1, 00:13:22.286 "io_qpairs": 84, 00:13:22.286 "current_admin_qpairs": 0, 00:13:22.286 "current_io_qpairs": 0, 00:13:22.286 "pending_bdev_io": 0, 00:13:22.286 "completed_nvme_io": 185, 00:13:22.286 "transports": [ 00:13:22.286 { 00:13:22.286 "trtype": "TCP" 00:13:22.286 } 00:13:22.286 ] 00:13:22.286 }, 00:13:22.286 { 00:13:22.286 "name": "nvmf_tgt_poll_group_3", 00:13:22.286 "admin_qpairs": 2, 00:13:22.286 "io_qpairs": 84, 00:13:22.286 "current_admin_qpairs": 0, 00:13:22.286 "current_io_qpairs": 0, 00:13:22.286 "pending_bdev_io": 0, 00:13:22.286 "completed_nvme_io": 135, 00:13:22.286 "transports": [ 00:13:22.286 { 00:13:22.286 "trtype": "TCP" 00:13:22.286 } 00:13:22.286 ] 00:13:22.286 } 00:13:22.286 ] 00:13:22.286 }' 00:13:22.286 17:05:38 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:22.286 17:05:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:22.286 17:05:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:22.286 17:05:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.286 17:05:38 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:22.286 17:05:38 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:22.286 17:05:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:22.286 17:05:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:22.286 17:05:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.286 17:05:38 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:22.286 17:05:38 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:22.286 17:05:38 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:22.286 17:05:38 -- target/rpc.sh@123 -- # nvmftestfini 00:13:22.286 17:05:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:22.286 17:05:38 -- nvmf/common.sh@116 -- # sync 00:13:22.286 17:05:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:22.286 17:05:38 -- nvmf/common.sh@119 -- # set +e 00:13:22.286 17:05:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:22.286 17:05:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:22.286 rmmod nvme_tcp 00:13:22.286 rmmod nvme_fabrics 00:13:22.286 rmmod nvme_keyring 00:13:22.286 17:05:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:22.286 17:05:38 -- nvmf/common.sh@123 -- # set -e 00:13:22.286 17:05:38 -- 
nvmf/common.sh@124 -- # return 0 00:13:22.286 17:05:38 -- nvmf/common.sh@477 -- # '[' -n 484209 ']' 00:13:22.286 17:05:38 -- nvmf/common.sh@478 -- # killprocess 484209 00:13:22.286 17:05:38 -- common/autotest_common.sh@926 -- # '[' -z 484209 ']' 00:13:22.286 17:05:38 -- common/autotest_common.sh@930 -- # kill -0 484209 00:13:22.286 17:05:38 -- common/autotest_common.sh@931 -- # uname 00:13:22.286 17:05:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:22.286 17:05:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 484209 00:13:22.286 17:05:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:22.286 17:05:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:22.286 17:05:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 484209' 00:13:22.286 killing process with pid 484209 00:13:22.286 17:05:38 -- common/autotest_common.sh@945 -- # kill 484209 00:13:22.286 17:05:38 -- common/autotest_common.sh@950 -- # wait 484209 00:13:22.544 17:05:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:22.544 17:05:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:22.544 17:05:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:22.544 17:05:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:22.544 17:05:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:22.544 17:05:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.544 17:05:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.544 17:05:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.094 17:05:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:25.094 00:13:25.094 real 0m25.184s 00:13:25.094 user 1m22.373s 00:13:25.094 sys 0m3.880s 00:13:25.094 17:05:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.094 17:05:40 -- common/autotest_common.sh@10 -- # set +x 00:13:25.094 ************************************ 00:13:25.094 END TEST nvmf_rpc 00:13:25.094 ************************************ 00:13:25.094 17:05:40 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:25.094 17:05:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:25.094 17:05:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.094 17:05:40 -- common/autotest_common.sh@10 -- # set +x 00:13:25.094 ************************************ 00:13:25.094 START TEST nvmf_invalid 00:13:25.094 ************************************ 00:13:25.094 17:05:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:25.094 * Looking for test storage... 
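A note on the jsum checks near the end of the nvmf_rpc output above: jsum applies a jq filter to the captured nvmf_get_stats JSON and sums the resulting numbers with awk. A sketch reconstructed from the target/rpc.sh@19-20 trace lines (feeding $stats through a here-string is our assumption; the trace does not show how the JSON reaches jq):

    # Sum a numeric jq filter over the nvmf_get_stats JSON held in $stats.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # In the run above:
    #   jsum '.poll_groups[].admin_qpairs'  -> 7   (2 + 2 + 1 + 2)
    #   jsum '.poll_groups[].io_qpairs'     -> 336 (84 * 4)

Both sums feed simple '(( sum > 0 ))' assertions, so the test only verifies that qpairs were actually created across the poll groups, not an exact count.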
00:13:25.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:25.094 17:05:40 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:25.094 17:05:40 -- nvmf/common.sh@7 -- # uname -s 00:13:25.094 17:05:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.094 17:05:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.094 17:05:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.094 17:05:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.094 17:05:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.094 17:05:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.094 17:05:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.094 17:05:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.094 17:05:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.094 17:05:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.094 17:05:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.094 17:05:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:25.094 17:05:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.094 17:05:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.094 17:05:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:25.094 17:05:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:25.094 17:05:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.094 17:05:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.094 17:05:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.094 17:05:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.094 17:05:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.094 17:05:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.094 17:05:40 -- paths/export.sh@5 -- # export PATH 00:13:25.094 17:05:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.094 17:05:40 -- nvmf/common.sh@46 -- # : 0 00:13:25.094 17:05:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:25.094 17:05:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:25.094 17:05:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:25.094 17:05:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.094 17:05:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.094 17:05:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:25.094 17:05:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:25.094 17:05:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:25.094 17:05:40 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:25.094 17:05:40 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:25.094 17:05:40 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:25.094 17:05:40 -- target/invalid.sh@14 -- # target=foobar 00:13:25.094 17:05:40 -- target/invalid.sh@16 -- # RANDOM=0 00:13:25.094 17:05:40 -- target/invalid.sh@34 -- # nvmftestinit 00:13:25.094 17:05:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:25.094 17:05:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.094 17:05:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:25.094 17:05:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:25.094 17:05:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:25.094 17:05:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.094 17:05:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.094 17:05:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.094 17:05:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:25.094 17:05:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:25.094 17:05:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:25.094 17:05:40 -- common/autotest_common.sh@10 -- # set +x 00:13:26.996 17:05:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:26.996 17:05:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:26.996 17:05:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:26.996 17:05:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:26.996 17:05:42 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:26.996 17:05:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:26.996 17:05:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:26.996 17:05:42 -- nvmf/common.sh@294 -- # net_devs=() 00:13:26.996 17:05:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:26.996 17:05:42 -- nvmf/common.sh@295 -- # e810=() 00:13:26.996 17:05:42 -- nvmf/common.sh@295 -- # local -ga e810 00:13:26.996 17:05:42 -- nvmf/common.sh@296 -- # x722=() 00:13:26.996 17:05:42 -- nvmf/common.sh@296 -- # local -ga x722 00:13:26.996 17:05:42 -- nvmf/common.sh@297 -- # mlx=() 00:13:26.996 17:05:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:26.996 17:05:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:26.996 17:05:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:26.996 17:05:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:26.996 17:05:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:26.996 17:05:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:26.996 17:05:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:26.996 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:26.996 17:05:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:26.996 17:05:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:26.996 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:26.996 17:05:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:26.996 17:05:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:26.996 
17:05:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.996 17:05:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:26.996 17:05:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.996 17:05:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:26.996 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:26.996 17:05:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.996 17:05:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:26.996 17:05:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:26.996 17:05:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:26.996 17:05:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:26.996 17:05:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:26.996 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:26.996 17:05:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:26.996 17:05:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:26.996 17:05:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:26.996 17:05:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:26.996 17:05:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:26.996 17:05:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.996 17:05:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.996 17:05:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:26.996 17:05:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:26.996 17:05:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:26.996 17:05:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:26.996 17:05:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:26.996 17:05:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:26.996 17:05:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.996 17:05:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:26.996 17:05:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:26.996 17:05:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:26.996 17:05:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:26.996 17:05:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:26.996 17:05:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:26.996 17:05:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:26.996 17:05:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:26.996 17:05:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:26.996 17:05:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:26.996 17:05:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:26.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:13:26.996 00:13:26.996 --- 10.0.0.2 ping statistics --- 00:13:26.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.996 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:13:26.996 17:05:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:26.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:26.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:13:26.996 00:13:26.996 --- 10.0.0.1 ping statistics --- 00:13:26.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.996 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:13:26.996 17:05:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.996 17:05:43 -- nvmf/common.sh@410 -- # return 0 00:13:26.996 17:05:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:26.996 17:05:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.996 17:05:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:26.996 17:05:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:26.996 17:05:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.996 17:05:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:26.996 17:05:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:26.996 17:05:43 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:26.996 17:05:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:26.996 17:05:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:26.996 17:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:26.996 17:05:43 -- nvmf/common.sh@469 -- # nvmfpid=488803 00:13:26.996 17:05:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:26.996 17:05:43 -- nvmf/common.sh@470 -- # waitforlisten 488803 00:13:26.996 17:05:43 -- common/autotest_common.sh@819 -- # '[' -z 488803 ']' 00:13:26.996 17:05:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.996 17:05:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:26.996 17:05:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.996 17:05:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:26.996 17:05:43 -- common/autotest_common.sh@10 -- # set +x 00:13:26.996 [2024-07-20 17:05:43.105614] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:26.996 [2024-07-20 17:05:43.105692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.996 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.264 [2024-07-20 17:05:43.176087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.264 [2024-07-20 17:05:43.268772] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:27.264 [2024-07-20 17:05:43.268969] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.264 [2024-07-20 17:05:43.268991] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.264 [2024-07-20 17:05:43.269007] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
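The nvmfappstart sequence traced above boils down to launching nvmf_tgt inside the test network namespace and polling its RPC socket until it answers. A rough sketch of that pattern (the namespace name, binary path, and -i/-e/-m flags are taken from the nvmf/common.sh@468 trace line; the polling loop is a simplified stand-in for waitforlisten, and rpc_get_methods is just a cheap no-op RPC to probe readiness):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the target is ready to accept RPCs.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket answers (pid 488803 in this run), the DPDK EAL initialization banner above and the reactor start-up notices that follow are printed by the target itself.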
00:13:27.264 [2024-07-20 17:05:43.269067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.264 [2024-07-20 17:05:43.269132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.264 [2024-07-20 17:05:43.269183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.264 [2024-07-20 17:05:43.269186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.194 17:05:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:28.194 17:05:44 -- common/autotest_common.sh@852 -- # return 0 00:13:28.194 17:05:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:28.194 17:05:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:28.194 17:05:44 -- common/autotest_common.sh@10 -- # set +x 00:13:28.194 17:05:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.194 17:05:44 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:28.194 17:05:44 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode26132 00:13:28.194 [2024-07-20 17:05:44.328247] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:28.194 17:05:44 -- target/invalid.sh@40 -- # out='request: 00:13:28.194 { 00:13:28.194 "nqn": "nqn.2016-06.io.spdk:cnode26132", 00:13:28.194 "tgt_name": "foobar", 00:13:28.194 "method": "nvmf_create_subsystem", 00:13:28.194 "req_id": 1 00:13:28.194 } 00:13:28.194 Got JSON-RPC error response 00:13:28.194 response: 00:13:28.194 { 00:13:28.194 "code": -32603, 00:13:28.194 "message": "Unable to find target foobar" 00:13:28.194 }' 00:13:28.194 17:05:44 -- target/invalid.sh@41 -- # [[ request: 00:13:28.194 { 00:13:28.194 "nqn": "nqn.2016-06.io.spdk:cnode26132", 00:13:28.194 "tgt_name": "foobar", 00:13:28.194 "method": "nvmf_create_subsystem", 00:13:28.194 "req_id": 1 00:13:28.194 } 00:13:28.194 Got JSON-RPC error response 00:13:28.194 response: 00:13:28.194 { 00:13:28.194 "code": -32603, 00:13:28.194 "message": "Unable to find target foobar" 00:13:28.194 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:28.194 17:05:44 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:28.452 17:05:44 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18505 00:13:28.452 [2024-07-20 17:05:44.573081] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18505: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:28.452 17:05:44 -- target/invalid.sh@45 -- # out='request: 00:13:28.452 { 00:13:28.452 "nqn": "nqn.2016-06.io.spdk:cnode18505", 00:13:28.452 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:28.452 "method": "nvmf_create_subsystem", 00:13:28.452 "req_id": 1 00:13:28.452 } 00:13:28.452 Got JSON-RPC error response 00:13:28.452 response: 00:13:28.452 { 00:13:28.452 "code": -32602, 00:13:28.452 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:28.452 }' 00:13:28.452 17:05:44 -- target/invalid.sh@46 -- # [[ request: 00:13:28.452 { 00:13:28.452 "nqn": "nqn.2016-06.io.spdk:cnode18505", 00:13:28.452 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:28.452 "method": "nvmf_create_subsystem", 00:13:28.452 "req_id": 1 00:13:28.452 } 00:13:28.452 Got JSON-RPC error response 00:13:28.452 response: 00:13:28.452 { 
00:13:28.452 "code": -32602, 00:13:28.452 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:28.452 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:28.452 17:05:44 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:28.452 17:05:44 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2009 00:13:28.710 [2024-07-20 17:05:44.825905] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2009: invalid model number 'SPDK_Controller' 00:13:28.710 17:05:44 -- target/invalid.sh@50 -- # out='request: 00:13:28.710 { 00:13:28.710 "nqn": "nqn.2016-06.io.spdk:cnode2009", 00:13:28.710 "model_number": "SPDK_Controller\u001f", 00:13:28.710 "method": "nvmf_create_subsystem", 00:13:28.710 "req_id": 1 00:13:28.710 } 00:13:28.710 Got JSON-RPC error response 00:13:28.710 response: 00:13:28.710 { 00:13:28.710 "code": -32602, 00:13:28.710 "message": "Invalid MN SPDK_Controller\u001f" 00:13:28.710 }' 00:13:28.710 17:05:44 -- target/invalid.sh@51 -- # [[ request: 00:13:28.710 { 00:13:28.710 "nqn": "nqn.2016-06.io.spdk:cnode2009", 00:13:28.710 "model_number": "SPDK_Controller\u001f", 00:13:28.710 "method": "nvmf_create_subsystem", 00:13:28.710 "req_id": 1 00:13:28.710 } 00:13:28.710 Got JSON-RPC error response 00:13:28.710 response: 00:13:28.710 { 00:13:28.710 "code": -32602, 00:13:28.710 "message": "Invalid MN SPDK_Controller\u001f" 00:13:28.710 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:28.710 17:05:44 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:28.710 17:05:44 -- target/invalid.sh@19 -- # local length=21 ll 00:13:28.710 17:05:44 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:28.710 17:05:44 -- target/invalid.sh@21 -- # local chars 00:13:28.710 17:05:44 -- target/invalid.sh@22 -- # local string 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # printf %x 76 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # string+=L 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # printf %x 35 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # string+='#' 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # printf %x 47 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # string+=/ 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # printf %x 93 00:13:28.710 17:05:44 -- target/invalid.sh@25 
-- # echo -e '\x5d' 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # string+=']' 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # printf %x 82 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:28.710 17:05:44 -- target/invalid.sh@25 -- # string+=R 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.710 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 51 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=3 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 34 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+='"' 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 82 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=R 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 46 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=. 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 100 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=d 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 122 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=z 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 87 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=W 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 78 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=N 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 85 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=U 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 40 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # 
echo -e '\x28' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+='(' 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 116 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=t 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 112 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=p 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 49 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=1 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 69 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=E 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 112 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=p 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # printf %x 101 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:28.968 17:05:44 -- target/invalid.sh@25 -- # string+=e 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.968 17:05:44 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.968 17:05:44 -- target/invalid.sh@28 -- # [[ L == \- ]] 00:13:28.968 17:05:44 -- target/invalid.sh@31 -- # echo 'L#/]R3"R.dzWNU(tp1Epe' 00:13:28.968 17:05:44 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'L#/]R3"R.dzWNU(tp1Epe' nqn.2016-06.io.spdk:cnode3320 00:13:29.233 [2024-07-20 17:05:45.138964] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3320: invalid serial number 'L#/]R3"R.dzWNU(tp1Epe' 00:13:29.234 17:05:45 -- target/invalid.sh@54 -- # out='request: 00:13:29.234 { 00:13:29.234 "nqn": "nqn.2016-06.io.spdk:cnode3320", 00:13:29.234 "serial_number": "L#/]R3\"R.dzWNU(tp1Epe", 00:13:29.234 "method": "nvmf_create_subsystem", 00:13:29.234 "req_id": 1 00:13:29.234 } 00:13:29.234 Got JSON-RPC error response 00:13:29.234 response: 00:13:29.234 { 00:13:29.234 "code": -32602, 00:13:29.234 "message": "Invalid SN L#/]R3\"R.dzWNU(tp1Epe" 00:13:29.234 }' 00:13:29.234 17:05:45 -- target/invalid.sh@55 -- # [[ request: 00:13:29.234 { 00:13:29.234 "nqn": "nqn.2016-06.io.spdk:cnode3320", 00:13:29.234 "serial_number": "L#/]R3\"R.dzWNU(tp1Epe", 00:13:29.234 "method": "nvmf_create_subsystem", 00:13:29.234 "req_id": 1 00:13:29.234 } 00:13:29.234 Got JSON-RPC error response 00:13:29.234 response: 00:13:29.234 { 00:13:29.234 "code": -32602, 00:13:29.234 "message": "Invalid SN 
L#/]R3\"R.dzWNU(tp1Epe" 00:13:29.234 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:29.234 17:05:45 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:29.234 17:05:45 -- target/invalid.sh@19 -- # local length=41 ll 00:13:29.234 17:05:45 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:29.234 17:05:45 -- target/invalid.sh@21 -- # local chars 00:13:29.234 17:05:45 -- target/invalid.sh@22 -- # local string 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 87 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=W 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 61 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+== 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 111 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=o 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 114 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=r 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 116 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=t 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 105 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=i 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 107 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=k 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 58 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=: 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 124 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+='|' 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 60 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+='<' 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 105 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=i 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 74 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=J 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 106 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=j 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 74 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=J 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 88 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=X 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 87 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=W 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 53 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=5 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 51 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=3 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 114 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=r 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 124 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+='|' 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 66 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=B 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 66 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=B 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 40 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+='(' 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 106 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=j 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 69 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=E 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 82 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=R 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 80 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # string+=P 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.234 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.234 17:05:45 -- target/invalid.sh@25 -- # printf %x 124 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+='|' 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 54 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=6 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 97 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=a 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 65 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=A 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 52 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=4 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 38 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+='&' 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 86 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=V 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 34 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+='"' 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 48 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=0 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 72 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=H 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 35 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+='#' 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 43 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=+ 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 66 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=B 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # printf %x 93 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:29.235 17:05:45 -- target/invalid.sh@25 -- # string+=']' 00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:29.235 17:05:45 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:29.235 17:05:45 -- target/invalid.sh@28 -- # [[ W == \- ]] 00:13:29.235 17:05:45 -- target/invalid.sh@31 -- # echo 'W=ortik:|<iJjJXW53r|BB(jERP|6aA4&V"0H#+B]' 00:13:31.842 17:05:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.842 17:05:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.842 17:05:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.377 17:05:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:34.377 00:13:34.377 real 0m9.246s 00:13:34.377 user 0m22.438s 00:13:34.377 sys 0m2.515s 00:13:34.377 17:05:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.377 17:05:50 -- common/autotest_common.sh@10 -- # set +x 00:13:34.377 ************************************ 00:13:34.377 END TEST nvmf_invalid 00:13:34.377 ************************************ 00:13:34.377 17:05:50 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:34.377 17:05:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:34.377 17:05:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:34.377 17:05:50 -- common/autotest_common.sh@10 -- # set +x 00:13:34.377 ************************************ 00:13:34.377 START TEST nvmf_abort 00:13:34.377 ************************************ 00:13:34.377 17:05:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:34.377 * Looking for test storage... 00:13:34.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:34.377 17:05:50 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:34.377 17:05:50 -- nvmf/common.sh@7 -- # uname -s 00:13:34.377 17:05:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.377 17:05:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.377 17:05:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.377 17:05:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.377 17:05:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.377 17:05:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.377 17:05:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.377 17:05:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.377 17:05:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.378 17:05:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.378 17:05:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:34.378 17:05:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:34.378 17:05:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.378 17:05:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.378 17:05:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.378 17:05:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:34.378 17:05:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.378 17:05:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.378 17:05:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.378 17:05:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.378 17:05:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.378 17:05:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.378 17:05:50 -- paths/export.sh@5 -- # export PATH 00:13:34.378 17:05:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.378 17:05:50 -- nvmf/common.sh@46 -- # : 0 00:13:34.378 17:05:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:34.378 17:05:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:34.378 17:05:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:34.378 17:05:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.378 17:05:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.378 17:05:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:34.378 17:05:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:34.378 17:05:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:34.378 17:05:50 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:34.378 17:05:50 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:34.378 17:05:50 -- target/abort.sh@14 -- # nvmftestinit 00:13:34.378 17:05:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:34.378 17:05:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.378 17:05:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:34.378 17:05:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:34.378 17:05:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:34.378 17:05:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:34.378 17:05:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.378 17:05:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.378 17:05:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:34.378 17:05:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:34.378 17:05:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:34.378 17:05:50 -- common/autotest_common.sh@10 -- # set +x 00:13:36.280 17:05:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:36.280 17:05:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:36.280 17:05:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:36.280 17:05:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:36.280 17:05:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:36.280 17:05:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:36.280 17:05:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:36.280 17:05:51 -- nvmf/common.sh@294 -- # net_devs=() 00:13:36.280 17:05:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:36.280 17:05:51 -- nvmf/common.sh@295 -- # e810=() 00:13:36.280 17:05:51 -- nvmf/common.sh@295 -- # local -ga e810 00:13:36.280 17:05:51 -- nvmf/common.sh@296 -- # x722=() 00:13:36.280 17:05:51 -- nvmf/common.sh@296 -- # local -ga x722 00:13:36.280 17:05:51 -- nvmf/common.sh@297 -- # mlx=() 00:13:36.280 17:05:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:36.280 17:05:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:36.280 17:05:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:36.280 17:05:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:36.280 17:05:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:36.280 17:05:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:36.280 17:05:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:36.280 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:36.280 17:05:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:36.280 17:05:51 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:36.280 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:36.280 17:05:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:36.280 17:05:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:36.280 17:05:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:36.280 17:05:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.280 17:05:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:36.281 17:05:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.281 17:05:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:36.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:36.281 17:05:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.281 17:05:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:36.281 17:05:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:36.281 17:05:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:36.281 17:05:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:36.281 17:05:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:36.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:36.281 17:05:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:36.281 17:05:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:36.281 17:05:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:36.281 17:05:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:36.281 17:05:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:36.281 17:05:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:36.281 17:05:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:36.281 17:05:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:36.281 17:05:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:36.281 17:05:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:36.281 17:05:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:36.281 17:05:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:36.281 17:05:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:36.281 17:05:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:36.281 17:05:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:36.281 17:05:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:36.281 17:05:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:36.281 17:05:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:36.281 17:05:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:36.281 17:05:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:36.281 17:05:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:36.281 17:05:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:36.281 17:05:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
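Everything nvmf_tcp_init has traced since interface discovery reduces to a short recipe: the target-side port (cvl_0_0) is parked in a private network namespace with the target address, its sibling port (cvl_0_1) stays in the root namespace as the initiator, and TCP port 4420 is opened between them, so initiator and target exchange NVMe/TCP over the NIC's two physical ports rather than over loopback. A minimal standalone sketch of the same plumbing, using the device names from this run (error handling omitted):

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up     # loopback for processes in the ns
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check

The ping pair at the end is exactly the verification the harness performs just below before it will start the target application.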
00:13:36.281 17:05:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:36.281 17:05:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:36.281 17:05:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:36.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:36.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:13:36.281 00:13:36.281 --- 10.0.0.2 ping statistics --- 00:13:36.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.281 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:13:36.281 17:05:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:36.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:36.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:13:36.281 00:13:36.281 --- 10.0.0.1 ping statistics --- 00:13:36.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:36.281 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:13:36.281 17:05:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:36.281 17:05:52 -- nvmf/common.sh@410 -- # return 0 00:13:36.281 17:05:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:36.281 17:05:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:36.281 17:05:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:36.281 17:05:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:36.281 17:05:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:36.281 17:05:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:36.281 17:05:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:36.281 17:05:52 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:36.281 17:05:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:36.281 17:05:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:36.281 17:05:52 -- common/autotest_common.sh@10 -- # set +x 00:13:36.281 17:05:52 -- nvmf/common.sh@469 -- # nvmfpid=491482 00:13:36.281 17:05:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:36.281 17:05:52 -- nvmf/common.sh@470 -- # waitforlisten 491482 00:13:36.281 17:05:52 -- common/autotest_common.sh@819 -- # '[' -z 491482 ']' 00:13:36.281 17:05:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.281 17:05:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:36.281 17:05:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.281 17:05:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:36.281 17:05:52 -- common/autotest_common.sh@10 -- # set +x 00:13:36.281 [2024-07-20 17:05:52.187762] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
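nvmfappstart, traced above, backgrounds nvmf_tgt inside the namespace and then blocks in waitforlisten until the application answers on its RPC socket. A sketch of that handshake; the polling loop body is an assumption (the real helper in autotest_common.sh adds timeouts and richer error reporting):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll the RPC UNIX socket until the target is ready, bailing out if it died.
    until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done

Only after this returns does the script start issuing the rpc_cmd configuration calls that follow.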
00:13:36.281 [2024-07-20 17:05:52.187891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.281 EAL: No free 2048 kB hugepages reported on node 1 00:13:36.281 [2024-07-20 17:05:52.254270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:36.281 [2024-07-20 17:05:52.341393] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:36.281 [2024-07-20 17:05:52.341541] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:36.281 [2024-07-20 17:05:52.341559] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.281 [2024-07-20 17:05:52.341572] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.281 [2024-07-20 17:05:52.341659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.281 [2024-07-20 17:05:52.341703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.281 [2024-07-20 17:05:52.341706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.214 17:05:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:37.214 17:05:53 -- common/autotest_common.sh@852 -- # return 0 00:13:37.214 17:05:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:37.214 17:05:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:37.214 17:05:53 -- common/autotest_common.sh@10 -- # set +x 00:13:37.214 17:05:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.214 17:05:53 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:37.214 17:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.214 17:05:53 -- common/autotest_common.sh@10 -- # set +x 00:13:37.215 [2024-07-20 17:05:53.153201] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.215 17:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.215 17:05:53 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:37.215 17:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.215 17:05:53 -- common/autotest_common.sh@10 -- # set +x 00:13:37.215 Malloc0 00:13:37.215 17:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.215 17:05:53 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:37.215 17:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.215 17:05:53 -- common/autotest_common.sh@10 -- # set +x 00:13:37.215 Delay0 00:13:37.215 17:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.215 17:05:53 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:37.215 17:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.215 17:05:53 -- common/autotest_common.sh@10 -- # set +x 00:13:37.215 17:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.215 17:05:53 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:37.215 17:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.215 17:05:53 -- common/autotest_common.sh@10 -- # set +x 00:13:37.215 17:05:53 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]]
00:13:37.215 17:05:53 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:13:37.215 17:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:37.215 17:05:53 -- common/autotest_common.sh@10 -- # set +x
00:13:37.215 [2024-07-20 17:05:53.218880] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:37.215 17:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:37.215 17:05:53 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:37.215 17:05:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:37.215 17:05:53 -- common/autotest_common.sh@10 -- # set +x
00:13:37.215 17:05:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:37.215 17:05:53 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:13:37.215 EAL: No free 2048 kB hugepages reported on node 1
00:13:37.215 [2024-07-20 17:05:53.325731] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:13:39.738 Initializing NVMe Controllers
00:13:39.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:13:39.738 controller IO queue size 128 less than required
00:13:39.738 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:13:39.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:13:39.738 Initialization complete. Launching workers.
00:13:39.738 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31459
00:13:39.738 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31520, failed to submit 62
00:13:39.738 success 31459, unsuccess 61, failed 0
00:13:39.738 17:05:55 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:13:39.738 17:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:39.738 17:05:55 -- common/autotest_common.sh@10 -- # set +x
00:13:39.738 17:05:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:39.738 17:05:55 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:13:39.738 17:05:55 -- target/abort.sh@38 -- # nvmftestfini
00:13:39.738 17:05:55 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:39.738 17:05:55 -- nvmf/common.sh@116 -- # sync
00:13:39.738 17:05:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:39.738 17:05:55 -- nvmf/common.sh@119 -- # set +e
00:13:39.738 17:05:55 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:39.738 17:05:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:39.738 rmmod nvme_tcp
00:13:39.738 rmmod nvme_fabrics
00:13:39.738 rmmod nvme_keyring
00:13:39.738 17:05:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:39.738 17:05:55 -- nvmf/common.sh@123 -- # set -e
00:13:39.738 17:05:55 -- nvmf/common.sh@124 -- # return 0
00:13:39.738 17:05:55 -- nvmf/common.sh@477 -- # '[' -n 491482 ']'
00:13:39.738 17:05:55 -- nvmf/common.sh@478 -- # killprocess 491482
00:13:39.738 17:05:55 -- common/autotest_common.sh@926 -- # '[' -z 491482 ']'
00:13:39.738 17:05:55 -- common/autotest_common.sh@930 -- # kill -0 491482
00:13:39.738 17:05:55 -- common/autotest_common.sh@931 -- # uname
00:13:39.738 17:05:55 --
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:39.738 17:05:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 491482 00:13:39.738 17:05:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:39.738 17:05:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:39.738 17:05:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 491482' 00:13:39.738 killing process with pid 491482 00:13:39.738 17:05:55 -- common/autotest_common.sh@945 -- # kill 491482 00:13:39.738 17:05:55 -- common/autotest_common.sh@950 -- # wait 491482 00:13:39.738 17:05:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:39.738 17:05:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:39.738 17:05:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:39.738 17:05:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.738 17:05:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:39.738 17:05:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.738 17:05:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.738 17:05:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.639 17:05:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:41.639 00:13:41.639 real 0m7.754s 00:13:41.639 user 0m12.626s 00:13:41.639 sys 0m2.444s 00:13:41.639 17:05:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.639 17:05:57 -- common/autotest_common.sh@10 -- # set +x 00:13:41.639 ************************************ 00:13:41.639 END TEST nvmf_abort 00:13:41.639 ************************************ 00:13:41.897 17:05:57 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:41.897 17:05:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:41.897 17:05:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.897 17:05:57 -- common/autotest_common.sh@10 -- # set +x 00:13:41.897 ************************************ 00:13:41.897 START TEST nvmf_ns_hotplug_stress 00:13:41.897 ************************************ 00:13:41.897 17:05:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:41.897 * Looking for test storage... 
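Before following the hot-plug run, it is worth unpacking the abort pass that just finished. Its control plane was a handful of RPCs (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock); the Delay0 bdev layered on Malloc0 injects roughly a second of latency per I/O, so a queue depth of 128 guarantees the abort tool always finds commands still in flight to cancel. Consolidated, with every value copied from the trace above:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0         # 64 MiB RAM disk, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
           -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s avg/p99 read+write latency (us)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
           -c 0x1 -t 1 -l warning -q 128                 # 1 s run, queue depth 128, core 0

Reading its summary above: 31520 aborts were submitted and 62 more could not be; 31459 landed on still-queued I/O ('success'), the other 61 apparently raced with completing commands ('unsuccess'), and nothing hard-failed, which is what the passing result reflects.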
00:13:41.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.897 17:05:57 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.897 17:05:57 -- nvmf/common.sh@7 -- # uname -s 00:13:41.897 17:05:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.897 17:05:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.897 17:05:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.897 17:05:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.897 17:05:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.897 17:05:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.897 17:05:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.897 17:05:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.897 17:05:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.897 17:05:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.897 17:05:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.897 17:05:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.897 17:05:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.897 17:05:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.897 17:05:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.897 17:05:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.897 17:05:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.897 17:05:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.897 17:05:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.897 17:05:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.897 17:05:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.897 17:05:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.897 17:05:57 -- paths/export.sh@5 -- # export PATH 00:13:41.897 17:05:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.897 17:05:57 -- nvmf/common.sh@46 -- # : 0 00:13:41.897 17:05:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:41.897 17:05:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:41.897 17:05:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:41.897 17:05:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.897 17:05:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.897 17:05:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:41.897 17:05:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:41.897 17:05:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:41.897 17:05:57 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:41.897 17:05:57 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:41.897 17:05:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:41.897 17:05:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.897 17:05:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:41.897 17:05:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:41.897 17:05:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:41.897 17:05:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.897 17:05:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.897 17:05:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.897 17:05:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:41.897 17:05:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:41.897 17:05:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:41.897 17:05:57 -- common/autotest_common.sh@10 -- # set +x 00:13:43.798 17:05:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:43.798 17:05:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:43.798 17:05:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:43.798 17:05:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:43.798 17:05:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:43.798 17:05:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:43.798 17:05:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:43.798 17:05:59 -- nvmf/common.sh@294 -- # net_devs=() 00:13:43.798 17:05:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:43.798 17:05:59 -- nvmf/common.sh@295 
-- # e810=() 00:13:43.798 17:05:59 -- nvmf/common.sh@295 -- # local -ga e810 00:13:43.798 17:05:59 -- nvmf/common.sh@296 -- # x722=() 00:13:43.798 17:05:59 -- nvmf/common.sh@296 -- # local -ga x722 00:13:43.798 17:05:59 -- nvmf/common.sh@297 -- # mlx=() 00:13:43.798 17:05:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:43.798 17:05:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.798 17:05:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:43.798 17:05:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:43.798 17:05:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:43.798 17:05:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:43.798 17:05:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:43.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:43.798 17:05:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:43.798 17:05:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:43.798 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:43.798 17:05:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:43.798 17:05:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:43.798 17:05:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.798 17:05:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:43.798 17:05:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.798 17:05:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:43.798 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:13:43.798 17:05:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.798 17:05:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:43.798 17:05:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.798 17:05:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:43.798 17:05:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.798 17:05:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:43.798 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:43.798 17:05:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.798 17:05:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:43.798 17:05:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:43.798 17:05:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:43.798 17:05:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:43.798 17:05:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.798 17:05:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.798 17:05:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.798 17:05:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:43.798 17:05:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.798 17:05:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.798 17:05:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:43.798 17:05:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.798 17:05:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.798 17:05:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:43.798 17:05:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:43.798 17:05:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.798 17:05:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.798 17:05:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.798 17:05:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.798 17:05:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:43.798 17:05:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.057 17:05:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.057 17:05:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.057 17:05:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:44.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:13:44.057 00:13:44.057 --- 10.0.0.2 ping statistics --- 00:13:44.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.057 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:13:44.057 17:05:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:44.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:13:44.057 00:13:44.057 --- 10.0.0.1 ping statistics --- 00:13:44.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.057 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:13:44.057 17:06:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.057 17:06:00 -- nvmf/common.sh@410 -- # return 0 00:13:44.057 17:06:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:44.057 17:06:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.057 17:06:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:44.057 17:06:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:44.057 17:06:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.057 17:06:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:44.057 17:06:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:44.057 17:06:00 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:44.057 17:06:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:44.057 17:06:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:44.057 17:06:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.057 17:06:00 -- nvmf/common.sh@469 -- # nvmfpid=493871 00:13:44.057 17:06:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:44.057 17:06:00 -- nvmf/common.sh@470 -- # waitforlisten 493871 00:13:44.057 17:06:00 -- common/autotest_common.sh@819 -- # '[' -z 493871 ']' 00:13:44.057 17:06:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.057 17:06:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:44.057 17:06:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.057 17:06:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:44.057 17:06:00 -- common/autotest_common.sh@10 -- # set +x 00:13:44.057 [2024-07-20 17:06:00.067761] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:44.057 [2024-07-20 17:06:00.067875] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.057 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.057 [2024-07-20 17:06:00.134603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.315 [2024-07-20 17:06:00.221906] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:44.315 [2024-07-20 17:06:00.222069] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.315 [2024-07-20 17:06:00.222087] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.315 [2024-07-20 17:06:00.222109] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
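With the target for this test booting, the single-threaded phase of ns_hotplug_stress.sh is about to run spdk_nvme_perf against cnode1 for 30 seconds while the control plane repeatedly hot-removes and re-adds one namespace and live-resizes another. Reconstructed from the rpc.py calls traced below (the loop framing is an assumption; only the individual commands appear in the log):

    ./build/bin/spdk_nvme_perf -c 0x1 -t 30 -q 128 -w randread -o 512 -Q 1000 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    PERF_PID=$!                                           # 494410 in this run
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do             # keep stressing until perf exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove Delay0
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
        rpc.py bdev_null_resize NULL1 $((++null_size))    # grow the NULL1 namespace live
    done

The I/O workload keeps running throughout; the 'Message suppressed 999 times: Read completed with error (sct=0, sc=11)' lines below are perf observing Invalid Namespace or Format while a namespace is detached, the expected symptom this test is designed to provoke and survive.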
00:13:44.315 [2024-07-20 17:06:00.222179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.315 [2024-07-20 17:06:00.222206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:44.315 [2024-07-20 17:06:00.222209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.880 17:06:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:44.880 17:06:01 -- common/autotest_common.sh@852 -- # return 0 00:13:44.880 17:06:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:44.880 17:06:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:44.880 17:06:01 -- common/autotest_common.sh@10 -- # set +x 00:13:44.880 17:06:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.880 17:06:01 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:44.880 17:06:01 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:45.138 [2024-07-20 17:06:01.254982] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.138 17:06:01 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:45.394 17:06:01 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.650 [2024-07-20 17:06:01.753597] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.651 17:06:01 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.907 17:06:02 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:46.165 Malloc0 00:13:46.165 17:06:02 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:46.422 Delay0 00:13:46.422 17:06:02 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.680 17:06:02 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:46.937 NULL1 00:13:46.937 17:06:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:47.193 17:06:03 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=494410 00:13:47.193 17:06:03 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:47.193 17:06:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:47.193 17:06:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.193 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.449 17:06:03 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.706 17:06:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:47.706 17:06:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:47.963 true 00:13:47.963 17:06:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:47.963 17:06:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.221 17:06:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.478 17:06:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:48.478 17:06:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:48.735 true 00:13:48.735 17:06:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:48.735 17:06:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.667 Read completed with error (sct=0, sc=11) 00:13:49.667 17:06:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.667 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:49.667 17:06:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:49.667 17:06:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:49.924 true 00:13:49.924 17:06:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:49.924 17:06:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.180 17:06:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.437 17:06:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:50.437 17:06:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:50.694 true 00:13:50.694 17:06:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:50.694 17:06:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.626 17:06:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.882 17:06:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:51.882 17:06:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:52.140 true 00:13:52.140 17:06:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:52.140 17:06:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.398 17:06:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.655 17:06:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:52.655 17:06:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:52.912 true 00:13:52.912 17:06:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:52.912 17:06:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.844 17:06:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.844 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:54.101 17:06:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:54.101 17:06:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:54.358 true 00:13:54.358 17:06:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:54.358 17:06:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.616 17:06:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.873 17:06:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:54.873 17:06:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:55.130 true 00:13:55.130 17:06:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:55.130 17:06:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.073 17:06:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:56.073 17:06:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:56.073 17:06:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:56.330 true 00:13:56.330 17:06:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:56.330 17:06:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.587 17:06:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.849 17:06:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:56.849 17:06:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 
00:13:57.164 true 00:13:57.164 17:06:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:57.164 17:06:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.093 17:06:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.349 17:06:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:58.349 17:06:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:58.620 true 00:13:58.620 17:06:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:58.620 17:06:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.878 17:06:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.135 17:06:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:59.135 17:06:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:59.391 true 00:13:59.391 17:06:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:13:59.391 17:06:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.322 17:06:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.579 17:06:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:00.579 17:06:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:00.579 true 00:14:00.837 17:06:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:00.837 17:06:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.837 17:06:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:01.094 17:06:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:01.094 17:06:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:01.352 true 00:14:01.352 17:06:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:01.352 17:06:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.285 17:06:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:02.542 17:06:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:02.542 17:06:18 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:02.800 true 00:14:02.800 17:06:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:02.800 17:06:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.058 17:06:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.315 17:06:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:03.315 17:06:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:03.573 true 00:14:03.573 17:06:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:03.573 17:06:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.507 17:06:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:04.764 17:06:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:04.764 17:06:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:05.021 true 00:14:05.021 17:06:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:05.021 17:06:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.279 17:06:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.536 17:06:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:05.536 17:06:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:05.794 true 00:14:05.794 17:06:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:05.794 17:06:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.726 17:06:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.726 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:06.993 17:06:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:06.993 17:06:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:06.993 true 00:14:07.252 17:06:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:07.252 17:06:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.252 17:06:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.508 17:06:23 -- target/ns_hotplug_stress.sh@49 
-- # null_size=1020 00:14:07.508 17:06:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:07.764 true 00:14:07.764 17:06:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:07.764 17:06:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.691 17:06:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.948 17:06:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:08.948 17:06:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:09.204 true 00:14:09.204 17:06:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:09.204 17:06:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.468 17:06:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.746 17:06:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:09.746 17:06:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:10.003 true 00:14:10.003 17:06:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:10.003 17:06:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.934 17:06:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.192 17:06:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:11.192 17:06:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:11.192 true 00:14:11.192 17:06:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:11.192 17:06:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.449 17:06:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.707 17:06:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:11.707 17:06:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:11.964 true 00:14:11.964 17:06:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:11.964 17:06:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:12.894 17:06:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.149 17:06:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:13.149 
17:06:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:13.406 true 00:14:13.406 17:06:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:13.406 17:06:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.663 17:06:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.920 17:06:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:13.920 17:06:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:14.177 true 00:14:14.177 17:06:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:14.177 17:06:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.106 17:06:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.106 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.106 17:06:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:15.106 17:06:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:15.363 true 00:14:15.363 17:06:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:15.363 17:06:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.620 17:06:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.877 17:06:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:15.877 17:06:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:16.133 true 00:14:16.133 17:06:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:16.133 17:06:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.064 17:06:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:17.322 17:06:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:17.322 17:06:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:17.579 true 00:14:17.579 17:06:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:17.579 17:06:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:17.579 Initializing NVMe Controllers
00:14:17.579 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:17.579 Controller IO queue size 128, less than required.
00:14:17.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:17.579 Controller IO queue size 128, less than required.
00:14:17.579 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:17.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:17.579 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:14:17.579 Initialization complete. Launching workers.
00:14:17.579 ========================================================
00:14:17.579                                                                              Latency(us)
00:14:17.579 Device Information                                                      :     IOPS   MiB/s    Average        min        max
00:14:17.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   510.74    0.25  129579.08    2108.15 1123854.22
00:14:17.579 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11507.03    5.62   11092.70    3222.35  441386.11
00:14:17.579 ========================================================
00:14:17.579 Total                                                                   : 12017.77    5.87   16128.21    2108.15 1123854.22
00:14:17.579
00:14:17.836 17:06:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.093 17:06:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:18.093 17:06:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:18.350 true 00:14:18.350 17:06:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 494410 00:14:18.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (494410) - No such process 00:14:18.350 17:06:34 -- target/ns_hotplug_stress.sh@53 -- # wait 494410 00:14:18.350 17:06:34 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.607 17:06:34 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:18.864 17:06:34 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:18.864 17:06:34 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:18.864 17:06:34 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:18.864 17:06:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:18.864 17:06:34 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:18.864 null0 00:14:19.121 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.121 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.121 17:06:35 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:19.121 null1 00:14:19.379 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.379 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.379 17:06:35 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:19.379 null2 00:14:19.379 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.379 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.379 17:06:35 -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:19.636 null3 00:14:19.636 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.636 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.636 17:06:35 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:19.893 null4 00:14:19.893 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:19.893 17:06:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:19.893 17:06:35 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:20.150 null5 00:14:20.150 17:06:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.150 17:06:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.150 17:06:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:20.408 null6 00:14:20.408 17:06:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.408 17:06:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.408 17:06:36 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:20.666 null7 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
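The xtrace above is one loop in ns_hotplug_stress.sh (its lines 44-50): check with kill -0 that the background I/O generator (pid 494410) is still alive, hot-remove namespace 1, hot-add the Delay0 bdev back, then grow the NULL1 null bdev by one block. A minimal bash sketch of that loop, reconstructed from the trace rather than copied from the script; PERF_PID is a stand-in name for the I/O generator's pid, and RPC_PY is the rpc.py path the trace spells out:

    # Sketch reconstructed from the ns_hotplug_stress.sh@44-@50 trace above;
    # not the script's literal text. PERF_PID is an assumed variable name.
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do          # loop while perf still runs
        "$RPC_PY" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        "$RPC_PY" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        "$RPC_PY" bdev_null_resize NULL1 "$null_size"  # grow NULL1 one block per pass
    done

The interleaved bare "true" lines are the responses to bdev_null_resize, and the loop ends once kill -0 fails (the "No such process" message above), i.e. when the perf run finishes. Its summary table is internally consistent: the Total average latency is the IOPS-weighted mean of the two namespaces, (510.74 * 129579.08 + 11507.03 * 11092.70) / 12017.77 = 16128.6 us, matching the reported 16128.21 up to rounding of the displayed inputs.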
00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
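From line 58 onward the script switches to parallel hotplug: it creates eight null bdevs (null0 through null7, each 100 MB with a 4096-byte block size) and backgrounds one add/remove worker per bdev, collecting the worker pids for the "wait 499082 499083 ..." seen below. A sketch of that structure as it reads back from the @14-@18 and @58-@66 trace lines, again a reconstruction rather than the original source, reusing the RPC_PY stand-in from the previous snippet:

    # Reconstruction from the trace; add_remove does ten hot-add/hot-remove cycles.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$RPC_PY" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$RPC_PY" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$RPC_PY" bdev_null_create "null$i" 100 4096   # 100 MB bdev, 4096-byte blocks
        add_remove $((i + 1)) "null$i" &               # namespace i+1 cycles in background
        pids+=($!)
    done
    wait "${pids[@]}"                                  # reap all eight workers

With eight workers racing on the same subsystem, the remaining trace below is simply their interleaved iterations, which is the point of the stress test.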
00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.666 17:06:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:20.667 17:06:36 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:20.667 17:06:36 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:20.667 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:20.667 17:06:36 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:20.667 17:06:36 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:20.667 17:06:36 -- target/ns_hotplug_stress.sh@66 -- # wait 499082 499083 499085 499087 499089 499091 499093 499095 00:14:20.667 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:20.667 17:06:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:20.667 17:06:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:20.924 17:06:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:20.924 17:06:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:20.924 17:06:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.924 17:06:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.924 17:06:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:20.924 17:06:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.924 17:06:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:20.924 17:06:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.181 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.439 17:06:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.439 17:06:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.439 17:06:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.439 17:06:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.439 17:06:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.439 17:06:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.439 17:06:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.439 17:06:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:21.703 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:14:21.703 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.703 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:21.703 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.703 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.703 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:21.703 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:21.704 17:06:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:21.961 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:21.961 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:21.961 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.961 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:21.961 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.961 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.961 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:21.961 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.218 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:22.475 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.475 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:22.475 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:22.475 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:22.475 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:22.475 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:22.475 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.475 17:06:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.732 17:06:38 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:22.990 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:22.990 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:22.990 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:22.990 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:22.990 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:22.990 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:22.990 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:22.990 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.287 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.288 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.288 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.288 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.288 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:23.545 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:23.545 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:23.545 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:23.545 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.545 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.545 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.545 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:23.545 17:06:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.803 17:06:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.062 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.062 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.062 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.062 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.062 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.062 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.062 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.062 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.319 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.319 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.319 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:24.319 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.320 17:06:40 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.320 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.577 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.577 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.577 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.577 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.577 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.577 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.577 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.577 17:06:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.835 17:06:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.094 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.094 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.094 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.094 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.094 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.094 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.094 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.094 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.352 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.353 17:06:41 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.611 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.611 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.611 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.611 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.611 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.611 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.611 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.611 17:06:41 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.870 17:06:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.870 17:06:42 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.870 17:06:42 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.870 17:06:42 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:25.870 17:06:42 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:25.870 17:06:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:25.870 17:06:42 -- nvmf/common.sh@116 -- # sync 00:14:25.870 17:06:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:25.870 17:06:42 -- nvmf/common.sh@119 -- # set +e 00:14:25.870 17:06:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:25.870 17:06:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:25.870 rmmod nvme_tcp 00:14:25.870 rmmod nvme_fabrics 00:14:26.129 rmmod nvme_keyring 00:14:26.129 17:06:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:26.129 17:06:42 -- nvmf/common.sh@123 -- # set -e 00:14:26.129 17:06:42 -- nvmf/common.sh@124 -- # return 0 00:14:26.129 17:06:42 -- nvmf/common.sh@477 -- # '[' -n 493871 ']' 00:14:26.129 17:06:42 -- nvmf/common.sh@478 -- # killprocess 493871 00:14:26.129 17:06:42 -- common/autotest_common.sh@926 -- # '[' -z 493871 ']' 00:14:26.129 17:06:42 -- common/autotest_common.sh@930 -- # kill -0 493871 00:14:26.129 17:06:42 -- common/autotest_common.sh@931 -- # uname 00:14:26.129 17:06:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:26.129 17:06:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 493871 00:14:26.129 17:06:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:26.129 17:06:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:26.129 17:06:42 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 493871' 00:14:26.129 killing process with pid 493871 00:14:26.129 17:06:42 -- common/autotest_common.sh@945 -- # kill 493871 00:14:26.388 17:06:42 -- common/autotest_common.sh@950 -- # wait 493871 00:14:26.388 17:06:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:26.388 17:06:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:26.388 17:06:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:26.388 17:06:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.388 17:06:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:26.388 17:06:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.388 17:06:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.388 17:06:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.290 17:06:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:14:28.290
00:14:28.290 real    0m46.517s
00:14:28.290 user    3m28.230s
00:14:28.290 sys     0m16.624s
00:14:28.290 17:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.290 17:06:44 -- common/autotest_common.sh@10 -- # set +x
00:14:28.290 ************************************
00:14:28.290 END TEST nvmf_ns_hotplug_stress
00:14:28.290 ************************************
00:14:28.290 17:06:44 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:28.290 17:06:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:28.290 17:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:28.290 17:06:44 -- common/autotest_common.sh@10 -- # set +x
00:14:28.290 ************************************
00:14:28.290 START TEST nvmf_connect_stress
00:14:28.290 ************************************
00:14:28.290 17:06:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:28.290 * Looking for test storage... 
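The teardown above runs killprocess from common/autotest_common.sh: before sending the signal it verifies the pid is non-empty, still running, and does not name a sudo wrapper (ps reports reactor_1 here), then kills and reaps it. A hedged reconstruction of that guard from the traced lines 926-950, with argument handling simplified and the sudo branch's real behavior assumed:

    # Reconstructed from the autotest_common.sh@926-@950 trace; simplified sketch.
    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                        # @926: refuse an empty pid
        kill -0 "$pid" || return 1                       # @930: is it still running?
        if [[ $(uname) == Linux ]]; then                 # @931
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @932: reactor_1 here
            [[ $process_name == sudo ]] && return 1      # @936: sudo handled specially (assumed)
        fi
        echo "killing process with pid $pid"             # @944
        kill "$pid"                                      # @945
        wait "$pid"                                      # @950: reap, avoid a zombie
    }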
00:14:28.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.290 17:06:44 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.290 17:06:44 -- nvmf/common.sh@7 -- # uname -s 00:14:28.290 17:06:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.290 17:06:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.290 17:06:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.290 17:06:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.290 17:06:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.290 17:06:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.290 17:06:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.290 17:06:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.290 17:06:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.290 17:06:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.290 17:06:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.290 17:06:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.290 17:06:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.290 17:06:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.290 17:06:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.290 17:06:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.290 17:06:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.290 17:06:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.290 17:06:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.290 17:06:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.290 17:06:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.290 17:06:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.290 17:06:44 -- paths/export.sh@5 -- # export PATH 00:14:28.290 17:06:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.290 17:06:44 -- nvmf/common.sh@46 -- # : 0 00:14:28.290 17:06:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.290 17:06:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.290 17:06:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.290 17:06:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.290 17:06:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.290 17:06:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:28.290 17:06:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.290 17:06:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.290 17:06:44 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:28.290 17:06:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:28.290 17:06:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.290 17:06:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:28.290 17:06:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:28.290 17:06:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:28.290 17:06:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.290 17:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.290 17:06:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.290 17:06:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:28.290 17:06:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:28.290 17:06:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:28.291 17:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:30.815 17:06:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:30.815 17:06:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:30.815 17:06:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:30.815 17:06:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:30.815 17:06:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:30.815 17:06:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:30.815 17:06:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:30.815 17:06:46 -- nvmf/common.sh@294 -- # net_devs=() 00:14:30.815 17:06:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:30.815 17:06:46 -- nvmf/common.sh@295 -- # e810=() 00:14:30.815 17:06:46 -- nvmf/common.sh@295 -- # local -ga e810 00:14:30.815 17:06:46 -- nvmf/common.sh@296 -- # x722=() 
00:14:30.815 17:06:46 -- nvmf/common.sh@296 -- # local -ga x722 00:14:30.815 17:06:46 -- nvmf/common.sh@297 -- # mlx=() 00:14:30.815 17:06:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:30.815 17:06:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.815 17:06:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:30.815 17:06:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:30.815 17:06:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:30.815 17:06:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.815 17:06:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:30.815 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:30.815 17:06:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.815 17:06:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:30.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:30.815 17:06:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:30.815 17:06:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.815 17:06:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.815 17:06:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.815 17:06:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.815 17:06:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:30.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:30.815 17:06:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
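For each matching PCI function, the framework resolves the kernel interface name directly from sysfs rather than from any driver tool. A minimal standalone version of that lookup, using the bus addresses and cvl_0_* names printed in the trace (the second port is resolved the same way just below):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one subdirectory per netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done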
00:14:30.815 17:06:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.815 17:06:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.815 17:06:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.815 17:06:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.815 17:06:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:30.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:30.815 17:06:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.815 17:06:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:30.815 17:06:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:30.815 17:06:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:30.815 17:06:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.815 17:06:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.815 17:06:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.815 17:06:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:30.815 17:06:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.815 17:06:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.815 17:06:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:30.815 17:06:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.815 17:06:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.815 17:06:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:30.815 17:06:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:30.815 17:06:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.815 17:06:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.815 17:06:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.815 17:06:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.815 17:06:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:30.815 17:06:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.815 17:06:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.815 17:06:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.815 17:06:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:30.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:14:30.815 00:14:30.815 --- 10.0.0.2 ping statistics --- 00:14:30.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.815 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:14:30.815 17:06:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:30.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:30.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:14:30.815 00:14:30.815 --- 10.0.0.1 ping statistics --- 00:14:30.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.815 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:30.815 17:06:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.815 17:06:46 -- nvmf/common.sh@410 -- # return 0 00:14:30.815 17:06:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:30.815 17:06:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.815 17:06:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:30.815 17:06:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.815 17:06:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:30.815 17:06:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:30.815 17:06:46 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:30.815 17:06:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:30.815 17:06:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:30.815 17:06:46 -- common/autotest_common.sh@10 -- # set +x 00:14:30.815 17:06:46 -- nvmf/common.sh@469 -- # nvmfpid=501869 00:14:30.815 17:06:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:30.815 17:06:46 -- nvmf/common.sh@470 -- # waitforlisten 501869 00:14:30.815 17:06:46 -- common/autotest_common.sh@819 -- # '[' -z 501869 ']' 00:14:30.815 17:06:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.815 17:06:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:30.815 17:06:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.815 17:06:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:30.815 17:06:46 -- common/autotest_common.sh@10 -- # set +x 00:14:30.815 [2024-07-20 17:06:46.606577] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:30.815 [2024-07-20 17:06:46.606653] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.815 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.815 [2024-07-20 17:06:46.671922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.815 [2024-07-20 17:06:46.758919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:30.815 [2024-07-20 17:06:46.759068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.815 [2024-07-20 17:06:46.759085] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.815 [2024-07-20 17:06:46.759097] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
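nvmf_tcp_init builds a two-host topology out of a single dual-port NIC: cvl_0_0 is moved into a fresh network namespace and plays the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the commands traced above, with error handling omitted:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2        # initiator -> target, verified above
    # ...the target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE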
00:14:30.815 [2024-07-20 17:06:46.759218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.815 [2024-07-20 17:06:46.759270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.815 [2024-07-20 17:06:46.759273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.748 17:06:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:31.748 17:06:47 -- common/autotest_common.sh@852 -- # return 0 00:14:31.748 17:06:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:31.748 17:06:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:31.748 17:06:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.748 17:06:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.748 17:06:47 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.748 17:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.748 17:06:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.748 [2024-07-20 17:06:47.574617] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.748 17:06:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.748 17:06:47 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:31.748 17:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.748 17:06:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.748 17:06:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.748 17:06:47 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.748 17:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.748 17:06:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.748 [2024-07-20 17:06:47.607945] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.748 17:06:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.748 17:06:47 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:31.748 17:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.748 17:06:47 -- common/autotest_common.sh@10 -- # set +x 00:14:31.748 NULL1 00:14:31.748 17:06:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.748 17:06:47 -- target/connect_stress.sh@21 -- # PERF_PID=501974 00:14:31.748 17:06:47 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:31.748 17:06:47 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.748 17:06:47 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.748 17:06:47 -- target/connect_stress.sh@28 -- # cat 00:14:31.748 17:06:47 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:31.748 17:06:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.748 17:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.748 17:06:47 -- common/autotest_common.sh@10 -- # set +x 00:14:32.006 17:06:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.006 17:06:47 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:32.006 17:06:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.006 17:06:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.006 17:06:47 -- common/autotest_common.sh@10 -- # set +x 00:14:32.264 17:06:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.264 17:06:48 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:32.264 17:06:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.264 17:06:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.264 17:06:48 -- common/autotest_common.sh@10 -- # set +x 00:14:32.521 17:06:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.521 17:06:48 -- target/connect_stress.sh@34 -- # 
kill -0 501974 00:14:32.521 17:06:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.521 17:06:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.521 17:06:48 -- common/autotest_common.sh@10 -- # set +x 00:14:33.089 17:06:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.089 17:06:48 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:33.089 17:06:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.089 17:06:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.089 17:06:48 -- common/autotest_common.sh@10 -- # set +x 00:14:33.346 17:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.346 17:06:49 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:33.346 17:06:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.346 17:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.346 17:06:49 -- common/autotest_common.sh@10 -- # set +x 00:14:33.604 17:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.604 17:06:49 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:33.604 17:06:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.604 17:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.604 17:06:49 -- common/autotest_common.sh@10 -- # set +x 00:14:33.861 17:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.861 17:06:49 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:33.861 17:06:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.861 17:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.861 17:06:49 -- common/autotest_common.sh@10 -- # set +x 00:14:34.118 17:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.118 17:06:50 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:34.118 17:06:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.118 17:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.118 17:06:50 -- common/autotest_common.sh@10 -- # set +x 00:14:34.684 17:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.684 17:06:50 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:34.684 17:06:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.684 17:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.684 17:06:50 -- common/autotest_common.sh@10 -- # set +x 00:14:34.944 17:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.944 17:06:50 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:34.944 17:06:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.944 17:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.944 17:06:50 -- common/autotest_common.sh@10 -- # set +x 00:14:35.201 17:06:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.201 17:06:51 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:35.201 17:06:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.201 17:06:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.201 17:06:51 -- common/autotest_common.sh@10 -- # set +x 00:14:35.459 17:06:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.459 17:06:51 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:35.459 17:06:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.459 17:06:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.459 17:06:51 -- common/autotest_common.sh@10 -- # set +x 00:14:35.716 17:06:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.716 17:06:51 -- target/connect_stress.sh@34 -- # kill -0 501974 
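For orientation while the poll entries repeat: stripped to its RPC calls, the setup that kicked off this test looks roughly as follows (all values from the trace; rpc_cmd is the framework's JSON-RPC wrapper). The bare rpc_cmd calls interleaved with kill -0 above suggest the harness replays the rpc.txt batch while the stressor runs, but the file's contents are not visible in this excerpt, so treat the loop body as an assumption:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512      # null bdev: 1000 MiB, 512 B blocks
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"    # assumed: replay the 20 RPCs queued by the seq 1 20 loop above
    done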
00:14:35.716 17:06:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.716 17:06:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.716 17:06:51 -- common/autotest_common.sh@10 -- # set +x 00:14:36.282 17:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.282 17:06:52 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:36.282 17:06:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.282 17:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.282 17:06:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.540 17:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.540 17:06:52 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:36.540 17:06:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.540 17:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.540 17:06:52 -- common/autotest_common.sh@10 -- # set +x 00:14:36.798 17:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.798 17:06:52 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:36.798 17:06:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.798 17:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.798 17:06:52 -- common/autotest_common.sh@10 -- # set +x 00:14:37.054 17:06:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.054 17:06:53 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:37.054 17:06:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.054 17:06:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.054 17:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:37.310 17:06:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.310 17:06:53 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:37.310 17:06:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.310 17:06:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.310 17:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:37.873 17:06:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.873 17:06:53 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:37.873 17:06:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.873 17:06:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.873 17:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:38.129 17:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.129 17:06:54 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:38.130 17:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.130 17:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.130 17:06:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.386 17:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.386 17:06:54 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:38.386 17:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.386 17:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.386 17:06:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.643 17:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.643 17:06:54 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:38.643 17:06:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.643 17:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.643 17:06:54 -- common/autotest_common.sh@10 -- # set +x 00:14:38.900 17:06:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.900 17:06:55 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:38.900 17:06:55 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.900 17:06:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.900 17:06:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.464 17:06:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.464 17:06:55 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:39.464 17:06:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.464 17:06:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.464 17:06:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.721 17:06:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.721 17:06:55 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:39.721 17:06:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.721 17:06:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.721 17:06:55 -- common/autotest_common.sh@10 -- # set +x 00:14:39.979 17:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.979 17:06:56 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:39.979 17:06:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.979 17:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.979 17:06:56 -- common/autotest_common.sh@10 -- # set +x 00:14:40.235 17:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.235 17:06:56 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:40.235 17:06:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.235 17:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.235 17:06:56 -- common/autotest_common.sh@10 -- # set +x 00:14:40.798 17:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.798 17:06:56 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:40.798 17:06:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.798 17:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.798 17:06:56 -- common/autotest_common.sh@10 -- # set +x 00:14:41.055 17:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.055 17:06:56 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:41.055 17:06:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.055 17:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.055 17:06:56 -- common/autotest_common.sh@10 -- # set +x 00:14:41.311 17:06:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.311 17:06:57 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:41.311 17:06:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.311 17:06:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.311 17:06:57 -- common/autotest_common.sh@10 -- # set +x 00:14:41.568 17:06:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.568 17:06:57 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:41.568 17:06:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.568 17:06:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.568 17:06:57 -- common/autotest_common.sh@10 -- # set +x 00:14:41.824 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:41.824 17:06:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.824 17:06:57 -- target/connect_stress.sh@34 -- # kill -0 501974 00:14:41.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (501974) - No such process 00:14:41.824 17:06:57 -- target/connect_stress.sh@38 -- # wait 501974 00:14:41.824 17:06:57 -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:41.824 17:06:57 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:41.824 17:06:57 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:41.824 17:06:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:41.824 17:06:57 -- nvmf/common.sh@116 -- # sync 00:14:41.824 17:06:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:41.824 17:06:57 -- nvmf/common.sh@119 -- # set +e 00:14:41.824 17:06:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:41.824 17:06:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:41.824 rmmod nvme_tcp 00:14:41.824 rmmod nvme_fabrics 00:14:41.824 rmmod nvme_keyring 00:14:42.081 17:06:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:42.081 17:06:57 -- nvmf/common.sh@123 -- # set -e 00:14:42.081 17:06:57 -- nvmf/common.sh@124 -- # return 0 00:14:42.081 17:06:57 -- nvmf/common.sh@477 -- # '[' -n 501869 ']' 00:14:42.081 17:06:57 -- nvmf/common.sh@478 -- # killprocess 501869 00:14:42.081 17:06:57 -- common/autotest_common.sh@926 -- # '[' -z 501869 ']' 00:14:42.081 17:06:57 -- common/autotest_common.sh@930 -- # kill -0 501869 00:14:42.081 17:06:57 -- common/autotest_common.sh@931 -- # uname 00:14:42.081 17:06:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:42.081 17:06:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 501869 00:14:42.081 17:06:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:42.081 17:06:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:42.081 17:06:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 501869' 00:14:42.081 killing process with pid 501869 00:14:42.081 17:06:58 -- common/autotest_common.sh@945 -- # kill 501869 00:14:42.081 17:06:58 -- common/autotest_common.sh@950 -- # wait 501869 00:14:42.338 17:06:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:42.338 17:06:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:42.338 17:06:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:42.338 17:06:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.338 17:06:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:42.338 17:06:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.338 17:06:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.338 17:06:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.243 17:07:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:44.243 00:14:44.243 real 0m15.940s 00:14:44.243 user 0m40.138s 00:14:44.243 sys 0m6.072s 00:14:44.243 17:07:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.243 17:07:00 -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 ************************************ 00:14:44.243 END TEST nvmf_connect_stress 00:14:44.243 ************************************ 00:14:44.243 17:07:00 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:44.243 17:07:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:44.243 17:07:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.243 17:07:00 -- common/autotest_common.sh@10 -- # set +x 00:14:44.243 ************************************ 00:14:44.243 START TEST nvmf_fused_ordering 00:14:44.243 ************************************ 00:14:44.243 17:07:00 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:44.243 * Looking for test storage... 00:14:44.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.243 17:07:00 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.243 17:07:00 -- nvmf/common.sh@7 -- # uname -s 00:14:44.243 17:07:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.243 17:07:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.243 17:07:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.243 17:07:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.243 17:07:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.243 17:07:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.243 17:07:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.243 17:07:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.243 17:07:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.243 17:07:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.243 17:07:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.243 17:07:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.243 17:07:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.243 17:07:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.243 17:07:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.243 17:07:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.243 17:07:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.243 17:07:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.243 17:07:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.243 17:07:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.244 17:07:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.244 17:07:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.244 17:07:00 -- paths/export.sh@5 -- # export PATH 00:14:44.244 17:07:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.244 17:07:00 -- nvmf/common.sh@46 -- # : 0 00:14:44.244 17:07:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:44.244 17:07:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:44.244 17:07:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:44.244 17:07:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.244 17:07:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.244 17:07:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:44.244 17:07:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:44.244 17:07:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:44.244 17:07:00 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:44.244 17:07:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:44.244 17:07:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.244 17:07:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:44.244 17:07:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:44.244 17:07:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:44.244 17:07:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.244 17:07:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.244 17:07:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.501 17:07:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:44.501 17:07:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:44.501 17:07:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:44.501 17:07:00 -- common/autotest_common.sh@10 -- # set +x 00:14:46.401 17:07:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:46.401 17:07:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:46.401 17:07:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:46.401 17:07:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:46.401 17:07:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:46.401 17:07:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:46.401 17:07:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:46.401 17:07:02 -- nvmf/common.sh@294 -- # net_devs=() 00:14:46.401 17:07:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:46.401 17:07:02 -- nvmf/common.sh@295 -- # e810=() 00:14:46.401 17:07:02 -- nvmf/common.sh@295 -- # local -ga e810 00:14:46.401 17:07:02 -- nvmf/common.sh@296 -- # x722=() 
00:14:46.401 17:07:02 -- nvmf/common.sh@296 -- # local -ga x722 00:14:46.401 17:07:02 -- nvmf/common.sh@297 -- # mlx=() 00:14:46.401 17:07:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:46.401 17:07:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.401 17:07:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:46.401 17:07:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:46.401 17:07:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:46.401 17:07:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:46.401 17:07:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:46.401 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:46.401 17:07:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:46.401 17:07:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:46.401 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:46.401 17:07:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:46.401 17:07:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:46.401 17:07:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.401 17:07:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:46.401 17:07:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.401 17:07:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:46.401 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:46.401 17:07:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:46.401 17:07:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:46.401 17:07:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.401 17:07:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:46.401 17:07:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.401 17:07:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:46.401 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:46.401 17:07:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.401 17:07:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:46.401 17:07:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:46.401 17:07:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:46.401 17:07:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:46.401 17:07:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.401 17:07:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.401 17:07:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:46.401 17:07:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:46.401 17:07:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:46.401 17:07:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:46.401 17:07:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:46.401 17:07:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:46.401 17:07:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.401 17:07:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:46.401 17:07:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:46.401 17:07:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:46.401 17:07:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:46.401 17:07:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:46.402 17:07:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:46.402 17:07:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:46.402 17:07:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:46.402 17:07:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:46.402 17:07:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:46.402 17:07:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:46.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:14:46.402 00:14:46.402 --- 10.0.0.2 ping statistics --- 00:14:46.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.402 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:46.402 17:07:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:46.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:46.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:14:46.402 00:14:46.402 --- 10.0.0.1 ping statistics --- 00:14:46.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.402 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:14:46.402 17:07:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.402 17:07:02 -- nvmf/common.sh@410 -- # return 0 00:14:46.402 17:07:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:46.402 17:07:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.402 17:07:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:46.402 17:07:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:46.402 17:07:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.402 17:07:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:46.402 17:07:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:46.402 17:07:02 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:46.402 17:07:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:46.402 17:07:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:46.402 17:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:46.402 17:07:02 -- nvmf/common.sh@469 -- # nvmfpid=505208 00:14:46.402 17:07:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.402 17:07:02 -- nvmf/common.sh@470 -- # waitforlisten 505208 00:14:46.402 17:07:02 -- common/autotest_common.sh@819 -- # '[' -z 505208 ']' 00:14:46.402 17:07:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.402 17:07:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:46.402 17:07:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.402 17:07:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:46.402 17:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:46.402 [2024-07-20 17:07:02.401190] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:46.402 [2024-07-20 17:07:02.401265] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.402 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.402 [2024-07-20 17:07:02.467552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.402 [2024-07-20 17:07:02.551920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:46.402 [2024-07-20 17:07:02.552075] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.402 [2024-07-20 17:07:02.552092] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.402 [2024-07-20 17:07:02.552118] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
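Every test in this suite brackets itself with the same nvmftestinit/nvmftestfini pair, which is why the PCI scan, namespace build-out, and ping checks above are a rerun of the connect_stress bring-up. The one material difference is the reactor mask passed to nvmf_tgt: -m takes a hex core bitmask, with bit n selecting core n:

    # -m 0xE = 0b1110 -> reactors on cores 1, 2 and 3 (the three "Reactor started" lines in the connect_stress run)
    # -m 0x2 = 0b0010 -> a single reactor on core 1 (this fused_ordering run)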
00:14:46.402 [2024-07-20 17:07:02.552154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.336 17:07:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:47.336 17:07:03 -- common/autotest_common.sh@852 -- # return 0 00:14:47.336 17:07:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:47.336 17:07:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:47.336 17:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.336 17:07:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.336 17:07:03 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:47.336 17:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.336 17:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.336 [2024-07-20 17:07:03.342541] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.336 17:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.337 17:07:03 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:47.337 17:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.337 17:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.337 17:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.337 17:07:03 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.337 17:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.337 17:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.337 [2024-07-20 17:07:03.358704] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.337 17:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.337 17:07:03 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:47.337 17:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.337 17:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.337 NULL1 00:14:47.337 17:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.337 17:07:03 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:47.337 17:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.337 17:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.337 17:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.337 17:07:03 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:47.337 17:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.337 17:07:03 -- common/autotest_common.sh@10 -- # set +x 00:14:47.337 17:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.337 17:07:03 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:47.337 [2024-07-20 17:07:03.402762] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
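Setup for this test, condensed from the rpc_cmd calls traced above (all values from the trace): a 1000 MiB null bdev is created, examined, and attached as a namespace of cnode1, and the fused_ordering binary then drives it over TCP, printing one fused_ordering(n) line per iteration below.

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512        # shows up below as "Namespace ID: 1 size: 1GB"
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'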
00:14:47.337 [2024-07-20 17:07:03.402813] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid505274 ] 00:14:47.337 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.717 Attached to nqn.2016-06.io.spdk:cnode1 00:14:48.717 Namespace ID: 1 size: 1GB 00:14:48.717 fused_ordering(0) 00:14:48.717 fused_ordering(1) 00:14:48.717 [… fused_ordering(2) through fused_ordering(1022) elided: the counter runs without a gap from 0 to 1023, one entry per iteration, timestamps advancing from 00:14:48.717 to 00:14:52.883 …] fused_ordering(1023) 00:14:52.883 17:07:08 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:52.883 17:07:08 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:52.883 17:07:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:52.883 17:07:08 -- nvmf/common.sh@116 -- # sync 00:14:52.883 17:07:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:52.883 17:07:08 -- nvmf/common.sh@119 -- # set +e 00:14:52.883 17:07:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:52.883 17:07:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:52.883 rmmod nvme_tcp 00:14:52.883 rmmod nvme_fabrics 00:14:52.883 rmmod nvme_keyring 00:14:52.883 17:07:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:52.883 17:07:08 -- nvmf/common.sh@123 -- # set -e 00:14:52.883 17:07:08 -- nvmf/common.sh@124 -- # return 0 00:14:52.883 17:07:08 -- nvmf/common.sh@477 -- # '[' -n 505208 ']' 00:14:52.883 17:07:08 -- nvmf/common.sh@478 -- # killprocess 505208 00:14:52.883 17:07:08 -- common/autotest_common.sh@926 -- # '[' -z 505208 ']' 00:14:52.883 17:07:08 -- common/autotest_common.sh@930 -- # kill -0 505208 00:14:52.883 17:07:08 -- common/autotest_common.sh@931 -- # uname 00:14:52.883 17:07:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:52.883
17:07:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 505208 00:14:52.883 17:07:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:52.883 17:07:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:52.883 17:07:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 505208' killing process with pid 505208 00:14:52.883 17:07:08 -- common/autotest_common.sh@945 -- # kill 505208 00:14:52.883 17:07:08 -- common/autotest_common.sh@950 -- # wait 505208 00:14:53.141 17:07:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:53.141 17:07:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:53.141 17:07:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:53.141 17:07:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.141 17:07:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:53.141 17:07:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.141 17:07:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.141 17:07:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.675 17:07:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:55.675 00:14:55.675 real 0m10.916s 00:14:55.675 user 0m9.208s 00:14:55.675 sys 0m5.557s 00:14:55.675 17:07:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.675 17:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:55.675 ************************************ 00:14:55.675 END TEST nvmf_fused_ordering 00:14:55.675 ************************************
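The teardown traced above boils down to stopping the target, unloading the initiator-side kernel modules, and removing the namespace. A sketch of the same steps (the ip netns delete line is an assumption about what the _remove_spdk_ns wrapper does; the rest mirrors the traced commands):

  kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop nvmf_tgt
  modprobe -v -r nvme-tcp              # rmmods nvme_tcp, nvme_fabrics, nvme_keyring as logged
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1             # clear the initiator-side address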
00:14:55.675 17:07:11 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:55.675 17:07:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:55.675 17:07:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.675 17:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:55.675 ************************************ 00:14:55.675 START TEST nvmf_delete_subsystem 00:14:55.675 ************************************ 00:14:55.675 17:07:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:55.675 * Looking for test storage... 00:14:55.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.675 17:07:11 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.675 17:07:11 -- nvmf/common.sh@7 -- # uname -s 00:14:55.675 17:07:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.675 17:07:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.675 17:07:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.675 17:07:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.675 17:07:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.675 17:07:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.675 17:07:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.675 17:07:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.675 17:07:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.675 17:07:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.675 17:07:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.675 17:07:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.675 17:07:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.675 17:07:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.675 17:07:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.675 17:07:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.675 17:07:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.675 17:07:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.675 17:07:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.675 17:07:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[… the same three toolchain directories, repeatedly prepended, elided …]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.675 17:07:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[… same repeated prefix elided …]:/var/lib/snapd/snap/bin 00:14:55.675 17:07:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[… same repeated prefix elided …]:/var/lib/snapd/snap/bin 00:14:55.675 17:07:11 -- paths/export.sh@5 -- # export PATH 00:14:55.675 17:07:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[… same repeated prefix elided …]:/var/lib/snapd/snap/bin 00:14:55.675 17:07:11 -- nvmf/common.sh@46 -- # : 0 00:14:55.675 17:07:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:55.675 17:07:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:55.675 17:07:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:55.675 17:07:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.675 17:07:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.675 17:07:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:55.675 17:07:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:55.675 17:07:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:55.675 17:07:11 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:55.675 17:07:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:55.675 17:07:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.675 17:07:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:55.675 17:07:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:55.675 17:07:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:55.675 17:07:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.675 17:07:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.675 17:07:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.675 17:07:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:55.675 17:07:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:55.675 17:07:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:55.675 17:07:11 -- common/autotest_common.sh@10 -- # set +x
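The host identity exported during this init comes from nvme-cli. A sketch of the derivation (the parameter expansion for NVME_HOSTID is an assumption that matches the logged values, where the host ID is the bare UUID from the host NQN):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip everything up to the last ':' to keep the UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")   # later passed to 'nvme connect'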
00:14:57.574 17:07:13 -- nvmf/common.sh@296 -- # local -ga x722 00:14:57.574 17:07:13 -- nvmf/common.sh@297 -- # mlx=() 00:14:57.574 17:07:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:57.574 17:07:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.574 17:07:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:57.574 17:07:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:57.574 17:07:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:57.574 17:07:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:57.574 17:07:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:57.574 17:07:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:57.574 17:07:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.574 17:07:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:57.574 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:57.574 17:07:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:57.574 17:07:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:57.574 17:07:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.574 17:07:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.574 17:07:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:57.574 17:07:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.574 17:07:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:57.574 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:57.574 17:07:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:57.574 17:07:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:57.575 17:07:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.575 17:07:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.575 17:07:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.575 17:07:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.575 17:07:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:57.575 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:57.575 17:07:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:57.575 17:07:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.575 17:07:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.575 17:07:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.575 17:07:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.575 17:07:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:57.575 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:57.575 17:07:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.575 17:07:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:57.575 17:07:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:57.575 17:07:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:57.575 17:07:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.575 17:07:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.575 17:07:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.575 17:07:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:57.575 17:07:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.575 17:07:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.575 17:07:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:57.575 17:07:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:57.575 17:07:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.575 17:07:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:57.575 17:07:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:57.575 17:07:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.575 17:07:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.575 17:07:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.575 17:07:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.575 17:07:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:57.575 17:07:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.575 17:07:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.575 17:07:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.575 17:07:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:57.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:14:57.575 00:14:57.575 --- 10.0.0.2 ping statistics --- 00:14:57.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.575 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:14:57.575 17:07:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:57.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:14:57.575 00:14:57.575 --- 10.0.0.1 ping statistics --- 00:14:57.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.575 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:14:57.575 17:07:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.575 17:07:13 -- nvmf/common.sh@410 -- # return 0 00:14:57.575 17:07:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:57.575 17:07:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.575 17:07:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:57.575 17:07:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.575 17:07:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:57.575 17:07:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:57.575 17:07:13 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:57.575 17:07:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.575 17:07:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:57.575 17:07:13 -- common/autotest_common.sh@10 -- # set +x 00:14:57.575 17:07:13 -- nvmf/common.sh@469 -- # nvmfpid=507865 00:14:57.575 17:07:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:57.575 17:07:13 -- nvmf/common.sh@470 -- # waitforlisten 507865 00:14:57.575 17:07:13 -- common/autotest_common.sh@819 -- # '[' -z 507865 ']' 00:14:57.575 17:07:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.575 17:07:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.575 17:07:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.575 17:07:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.575 17:07:13 -- common/autotest_common.sh@10 -- # set +x 00:14:57.575 [2024-07-20 17:07:13.506575] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:57.575 [2024-07-20 17:07:13.506666] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.575 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.575 [2024-07-20 17:07:13.573123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:57.575 [2024-07-20 17:07:13.660096] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.575 [2024-07-20 17:07:13.660249] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.575 [2024-07-20 17:07:13.660275] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.575 [2024-07-20 17:07:13.660288] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
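The address layout verified by the two pings above is built entirely with iproute2 plus one iptables rule. The same setup as a standalone sketch, run as root (device and namespace names are the ones discovered and created in this log):

  ip netns add cvl_0_0_ns_spdk                    # the target gets its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move one port of the E810 pair into it
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                              # cross-namespace reachability checks, as logged
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1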
00:14:57.575 [2024-07-20 17:07:13.660378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.575 [2024-07-20 17:07:13.660383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.506 17:07:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.506 17:07:14 -- common/autotest_common.sh@852 -- # return 0 00:14:58.506 17:07:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.506 17:07:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:58.506 17:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.506 17:07:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.506 17:07:14 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.506 17:07:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.506 17:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.506 [2024-07-20 17:07:14.524431] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.506 17:07:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.506 17:07:14 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:58.506 17:07:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.506 17:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.506 17:07:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.506 17:07:14 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.506 17:07:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.506 17:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.506 [2024-07-20 17:07:14.540605] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.506 17:07:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.506 17:07:14 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:58.506 17:07:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.506 17:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.506 NULL1 00:14:58.506 17:07:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.506 17:07:14 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:58.506 17:07:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.506 17:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.506 Delay0 00:14:58.506 17:07:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.506 17:07:14 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.506 17:07:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.506 17:07:14 -- common/autotest_common.sh@10 -- # set +x 00:14:58.506 17:07:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.506 17:07:14 -- target/delete_subsystem.sh@28 -- # perf_pid=508023 00:14:58.506 17:07:14 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:58.506 17:07:14 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:58.506 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.506
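Unlike the fused_ordering run, this test layers a delay bdev over the null bdev so that I/O submitted by spdk_nvme_perf is still queued when the subsystem is deleted; the four latencies passed above are in microseconds, i.e. roughly one second per operation. The same setup and load as a standalone sketch (RPC names and perf flags exactly as traced in the log):

  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # 5 s of 512-byte random I/O at queue depth 128, 70% reads, on cores 2-3 (-c 0xC)
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

Deleting the subsystem while this load runs is what produces the aborted-command flood condensed below: each queued request completes with sct=0, sc=8 (Command Aborted due to SQ Deletion in the generic status table) and further submissions fail with -6 (likely -ENXIO).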
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:01.026 17:07:16 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:01.026 17:07:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.026 17:07:16 -- common/autotest_common.sh@10 -- # set +x 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 [2024-07-20 17:07:16.707787] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e0c000c00 is same with the state(5) to be set 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error 
(sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 
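Each 'Read/Write completed with error (sct=0, sc=8)' entry in this burst is spdk_nvme_perf reporting an aborted command: status code type 0, status code 0x08 decodes to 'Command Aborted due to SQ Deletion' in the NVMe generic status table, which is the expected completion once the subsystem is torn down under active I/O. For reference, the target-side state that perf was driving was assembled by the rpc_cmd calls traced earlier in this test; reduced to plain rpc.py invocations (default /var/tmp/spdk.sock socket assumed), the sequence is roughly:

  # Sketch of the traced setup: a null bdev wrapped in a delay bdev,
  # exported over a TCP listener on the namespaced NIC.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512          # 1000 MB backing device, 512 B blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The one-second artificial latencies on Delay0 keep perf's 128-deep queues full, so the deletion is guaranteed to catch commands in flight.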
00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.026 Read completed with error (sct=0, sc=8) 00:15:01.026 starting I/O failed: -6 00:15:01.026 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 starting I/O failed: -6 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 starting I/O failed: -6 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 starting I/O failed: -6 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 starting I/O failed: -6 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 [2024-07-20 17:07:16.708565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2a3f0 is same with the state(5) to be set 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 
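The aborts above were provoked deliberately: the rpc_cmd nvmf_delete_subsystem call at the start of this burst removed cnode1 while spdk_nvme_perf still held open queue pairs, and delete_subsystem.sh then spins in a bounded loop until the perf process exits. A sketch of that delete-and-wait logic, reconstructed from the @34-@38 trace lines ($perf_pid stands for the pid captured when perf was launched; the script's exact control flow may differ):

  # Delete the subsystem mid-run, then give the initiator up to ~15 s
  # (30 checks at 0.5 s each) to notice the aborts and exit.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 30 )) && exit 1
      sleep 0.5
  done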
00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Write completed with error (sct=0, sc=8) 00:15:01.027 Read completed with error (sct=0, sc=8) 00:15:01.591 [2024-07-20 17:07:17.678121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10d70 is same with the state(5) to be set 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 [2024-07-20 17:07:17.706975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2a570 is same with the state(5) to be set 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 [2024-07-20 17:07:17.707141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xd12230 is same with the state(5) to be set 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 [2024-07-20 17:07:17.708601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e0c00c480 is same with the state(5) to be set 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Write completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 Read completed with error (sct=0, sc=8) 00:15:01.591 [2024-07-20 17:07:17.709265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2e0c00bf20 is same with the state(5) to be set 00:15:01.591 [2024-07-20 17:07:17.709723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd10d70 (9): Bad file descriptor 00:15:01.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:01.591 17:07:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.591 17:07:17 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:01.591 17:07:17 -- target/delete_subsystem.sh@35 -- # kill -0 508023 00:15:01.591 17:07:17 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:01.591 Initializing NVMe Controllers 00:15:01.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.591 Controller IO queue size 128, less than required. 00:15:01.591 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:01.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:01.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:01.592 Initialization complete. Launching workers. 
00:15:01.592 ========================================================
00:15:01.592 Latency(us)
00:15:01.592 Device Information : IOPS MiB/s Average min max
00:15:01.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.20 0.08 914115.40 411.28 1013520.32
00:15:01.592 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.22 0.08 918810.21 611.55 1011225.59
00:15:01.592 ========================================================
00:15:01.592 Total : 321.42 0.16 916441.07 411.28 1013520.32
00:15:01.592
00:15:02.156 17:07:18 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:02.156 17:07:18 -- target/delete_subsystem.sh@35 -- # kill -0 508023 00:15:02.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (508023) - No such process 00:15:02.156 17:07:18 -- target/delete_subsystem.sh@45 -- # NOT wait 508023 00:15:02.156 17:07:18 -- common/autotest_common.sh@640 -- # local es=0 00:15:02.156 17:07:18 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 508023 00:15:02.156 17:07:18 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:02.156 17:07:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:02.156 17:07:18 -- common/autotest_common.sh@632 -- # type -t wait 00:15:02.156 17:07:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:02.156 17:07:18 -- common/autotest_common.sh@643 -- # wait 508023 00:15:02.156 17:07:18 -- common/autotest_common.sh@643 -- # es=1 00:15:02.156 17:07:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:02.156 17:07:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:02.156 17:07:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:02.156 17:07:18 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:02.156 17:07:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.156 17:07:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.157 17:07:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.157 17:07:18 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.157 17:07:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.157 17:07:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.157 [2024-07-20 17:07:18.228065] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.157 17:07:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.157 17:07:18 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.157 17:07:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.157 17:07:18 -- common/autotest_common.sh@10 -- # set +x 00:15:02.157 17:07:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.157 17:07:18 -- target/delete_subsystem.sh@54 -- # perf_pid=508564 00:15:02.157 17:07:18 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:02.157 17:07:18 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:02.157 17:07:18 -- target/delete_subsystem.sh@57 -- # kill -0 508564 00:15:02.157 17:07:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:02.157 EAL: No free 2048 kB hugepages reported on
node 1 00:15:02.157 [2024-07-20 17:07:18.287438] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:02.721 17:07:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:02.721 17:07:18 -- target/delete_subsystem.sh@57 -- # kill -0 508564 00:15:02.721 17:07:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.285 17:07:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.285 17:07:19 -- target/delete_subsystem.sh@57 -- # kill -0 508564 00:15:03.285 17:07:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.854 17:07:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.854 17:07:19 -- target/delete_subsystem.sh@57 -- # kill -0 508564 00:15:03.854 17:07:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.111 17:07:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.111 17:07:20 -- target/delete_subsystem.sh@57 -- # kill -0 508564 00:15:04.111 17:07:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.676 17:07:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.676 17:07:20 -- target/delete_subsystem.sh@57 -- # kill -0 508564 00:15:04.676 17:07:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.241 17:07:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.241 17:07:21 -- target/delete_subsystem.sh@57 -- # kill -0 508564 00:15:05.241 17:07:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.498 Initializing NVMe Controllers 00:15:05.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:05.498 Controller IO queue size 128, less than required. 00:15:05.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:05.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:05.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:05.498 Initialization complete. Launching workers. 
00:15:05.498 ========================================================
00:15:05.498 Latency(us)
00:15:05.498 Device Information : IOPS MiB/s Average min max
00:15:05.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003497.77 1000272.52 1012944.56
00:15:05.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005328.54 1000295.40 1014129.91
00:15:05.498 ========================================================
00:15:05.498 Total : 256.00 0.12 1004413.16 1000272.52 1014129.91
00:15:05.498
00:15:05.756 17:07:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.756 17:07:21 -- target/delete_subsystem.sh@57 -- # kill -0 508564 00:15:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (508564) - No such process 00:15:05.756 17:07:21 -- target/delete_subsystem.sh@67 -- # wait 508564 00:15:05.756 17:07:21 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:05.756 17:07:21 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:05.756 17:07:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:05.756 17:07:21 -- nvmf/common.sh@116 -- # sync 00:15:05.756 17:07:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:05.756 17:07:21 -- nvmf/common.sh@119 -- # set +e 00:15:05.756 17:07:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:05.756 17:07:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:05.756 rmmod nvme_tcp 00:15:05.756 rmmod nvme_fabrics 00:15:05.756 rmmod nvme_keyring 00:15:05.756 17:07:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:05.756 17:07:21 -- nvmf/common.sh@123 -- # set -e 00:15:05.756 17:07:21 -- nvmf/common.sh@124 -- # return 0 00:15:05.756 17:07:21 -- nvmf/common.sh@477 -- # '[' -n 507865 ']' 00:15:05.756 17:07:21 -- nvmf/common.sh@478 -- # killprocess 507865 00:15:05.756 17:07:21 -- common/autotest_common.sh@926 -- # '[' -z 507865 ']' 00:15:05.756 17:07:21 -- common/autotest_common.sh@930 -- # kill -0 507865 00:15:05.756 17:07:21 -- common/autotest_common.sh@931 -- # uname 00:15:05.756 17:07:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.756 17:07:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 507865 00:15:05.756 17:07:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:05.756 17:07:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:05.756 17:07:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 507865' 00:15:05.756 killing process with pid 507865 00:15:05.756 17:07:21 -- common/autotest_common.sh@945 -- # kill 507865 00:15:05.756 17:07:21 -- common/autotest_common.sh@950 -- # wait 507865 00:15:06.014 17:07:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:06.014 17:07:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:06.014 17:07:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:06.014 17:07:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.014 17:07:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:06.014 17:07:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.014 17:07:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.014 17:07:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.544 17:07:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:08.544 00:15:08.544 real 0m12.854s 00:15:08.544 user 0m29.222s 00:15:08.544 sys 0m2.906s 00:15:08.544 17:07:24
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.544 17:07:24 -- common/autotest_common.sh@10 -- # set +x 00:15:08.544 ************************************ 00:15:08.544 END TEST nvmf_delete_subsystem 00:15:08.544 ************************************ 00:15:08.545 17:07:24 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:08.545 17:07:24 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:08.545 17:07:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:08.545 17:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:08.545 17:07:24 -- common/autotest_common.sh@10 -- # set +x 00:15:08.545 ************************************ 00:15:08.545 START TEST nvmf_nvme_cli 00:15:08.545 ************************************ 00:15:08.545 17:07:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:08.545 * Looking for test storage... 00:15:08.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.545 17:07:24 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.545 17:07:24 -- nvmf/common.sh@7 -- # uname -s 00:15:08.545 17:07:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.545 17:07:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.545 17:07:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.545 17:07:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.545 17:07:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.545 17:07:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.545 17:07:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.545 17:07:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.545 17:07:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.545 17:07:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.545 17:07:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.545 17:07:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.545 17:07:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.545 17:07:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.545 17:07:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.545 17:07:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.545 17:07:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.545 17:07:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.545 17:07:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.545 17:07:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.545 17:07:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.545 17:07:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.545 17:07:24 -- paths/export.sh@5 -- # export PATH 00:15:08.545 17:07:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.545 17:07:24 -- nvmf/common.sh@46 -- # : 0 00:15:08.545 17:07:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:08.545 17:07:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:08.545 17:07:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:08.545 17:07:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.545 17:07:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.545 17:07:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:08.545 17:07:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:08.545 17:07:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:08.545 17:07:24 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:08.545 17:07:24 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:08.545 17:07:24 -- target/nvme_cli.sh@14 -- # devs=() 00:15:08.545 17:07:24 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:08.545 17:07:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:08.545 17:07:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.545 17:07:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:08.545 17:07:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:08.545 17:07:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:08.545 17:07:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.545 17:07:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.545 17:07:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.545 17:07:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:08.545 17:07:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:08.545 17:07:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:08.545 17:07:24 -- common/autotest_common.sh@10 -- # set +x 00:15:10.442 17:07:26 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:10.442 17:07:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:10.442 17:07:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:10.442 17:07:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:10.442 17:07:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:10.442 17:07:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:10.442 17:07:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:10.442 17:07:26 -- nvmf/common.sh@294 -- # net_devs=() 00:15:10.442 17:07:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:10.442 17:07:26 -- nvmf/common.sh@295 -- # e810=() 00:15:10.442 17:07:26 -- nvmf/common.sh@295 -- # local -ga e810 00:15:10.442 17:07:26 -- nvmf/common.sh@296 -- # x722=() 00:15:10.442 17:07:26 -- nvmf/common.sh@296 -- # local -ga x722 00:15:10.442 17:07:26 -- nvmf/common.sh@297 -- # mlx=() 00:15:10.442 17:07:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:10.442 17:07:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.443 17:07:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:10.443 17:07:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:10.443 17:07:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:10.443 17:07:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:10.443 17:07:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:10.443 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:10.443 17:07:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:10.443 17:07:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:10.443 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:10.443 17:07:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
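The device scan above matched both E810 ports (vendor 0x8086, device 0x159b), and the steps traced just after it resolve each PCI function to its kernel net device through sysfs before nvmf_tcp_init splits the pair into a target namespace and an initiator side. A condensed sketch of both steps, with the addresses and interface names this rig uses:

  # Map a matched PCI function to its net device (the pci_net_devs lookup).
  pci=0000:0a:00.0
  pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
  pci_net_devs=("${pci_net_devs[@]##*/}")      # strip sysfs path, e.g. cvl_0_0
  # Wire the TCP test topology: target NIC at 10.0.0.2 in a private
  # namespace, initiator NIC at 10.0.0.1 in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-to-target sanity check, as traced below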
00:15:10.443 17:07:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:10.443 17:07:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:10.443 17:07:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.443 17:07:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:10.443 17:07:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.443 17:07:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:10.443 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:10.443 17:07:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.443 17:07:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:10.443 17:07:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.443 17:07:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:10.443 17:07:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.443 17:07:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:10.443 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:10.443 17:07:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.443 17:07:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:10.443 17:07:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:10.443 17:07:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:10.443 17:07:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.443 17:07:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.443 17:07:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.443 17:07:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:10.443 17:07:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.443 17:07:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.443 17:07:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:10.443 17:07:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.443 17:07:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.443 17:07:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:10.443 17:07:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:10.443 17:07:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.443 17:07:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.443 17:07:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.443 17:07:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.443 17:07:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:10.443 17:07:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.443 17:07:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.443 17:07:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.443 17:07:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:10.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:10.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:15:10.443 00:15:10.443 --- 10.0.0.2 ping statistics --- 00:15:10.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.443 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:15:10.443 17:07:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:15:10.443 00:15:10.443 --- 10.0.0.1 ping statistics --- 00:15:10.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.443 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:15:10.443 17:07:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.443 17:07:26 -- nvmf/common.sh@410 -- # return 0 00:15:10.443 17:07:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:10.443 17:07:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.443 17:07:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:10.443 17:07:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.443 17:07:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:10.443 17:07:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:10.443 17:07:26 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:10.443 17:07:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:10.443 17:07:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:10.443 17:07:26 -- common/autotest_common.sh@10 -- # set +x 00:15:10.443 17:07:26 -- nvmf/common.sh@469 -- # nvmfpid=510923 00:15:10.443 17:07:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.443 17:07:26 -- nvmf/common.sh@470 -- # waitforlisten 510923 00:15:10.443 17:07:26 -- common/autotest_common.sh@819 -- # '[' -z 510923 ']' 00:15:10.443 17:07:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.443 17:07:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.443 17:07:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.443 17:07:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.443 17:07:26 -- common/autotest_common.sh@10 -- # set +x 00:15:10.443 [2024-07-20 17:07:26.418970] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:10.443 [2024-07-20 17:07:26.419046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.443 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.443 [2024-07-20 17:07:26.486726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.443 [2024-07-20 17:07:26.579042] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:10.443 [2024-07-20 17:07:26.579198] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.443 [2024-07-20 17:07:26.579216] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:10.443 [2024-07-20 17:07:26.579229] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:10.443 [2024-07-20 17:07:26.579290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.443 [2024-07-20 17:07:26.581812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.443 [2024-07-20 17:07:26.581869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.443 [2024-07-20 17:07:26.581873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.376 17:07:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:11.376 17:07:27 -- common/autotest_common.sh@852 -- # return 0 00:15:11.376 17:07:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:11.376 17:07:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:11.376 17:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.376 17:07:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.376 17:07:27 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.376 17:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.377 17:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.377 [2024-07-20 17:07:27.423429] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.377 17:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.377 17:07:27 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:11.377 17:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.377 17:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.377 Malloc0 00:15:11.377 17:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.377 17:07:27 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:11.377 17:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.377 17:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.377 Malloc1 00:15:11.377 17:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.377 17:07:27 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:11.377 17:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.377 17:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.377 17:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.377 17:07:27 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:11.377 17:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.377 17:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.377 17:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.377 17:07:27 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.377 17:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.377 17:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.377 17:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.377 17:07:27 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.377 17:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.377 17:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.377 [2024-07-20 17:07:27.509993] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:15:11.377 17:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.377 17:07:27 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:11.377 17:07:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.377 17:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:11.377 17:07:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.377 17:07:27 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:11.635 00:15:11.635 Discovery Log Number of Records 2, Generation counter 2 00:15:11.635 =====Discovery Log Entry 0====== 00:15:11.635 trtype: tcp 00:15:11.635 adrfam: ipv4 00:15:11.635 subtype: current discovery subsystem 00:15:11.635 treq: not required 00:15:11.635 portid: 0 00:15:11.635 trsvcid: 4420 00:15:11.635 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:11.635 traddr: 10.0.0.2 00:15:11.635 eflags: explicit discovery connections, duplicate discovery information 00:15:11.635 sectype: none 00:15:11.635 =====Discovery Log Entry 1====== 00:15:11.635 trtype: tcp 00:15:11.635 adrfam: ipv4 00:15:11.635 subtype: nvme subsystem 00:15:11.635 treq: not required 00:15:11.635 portid: 0 00:15:11.635 trsvcid: 4420 00:15:11.635 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:11.635 traddr: 10.0.0.2 00:15:11.635 eflags: none 00:15:11.635 sectype: none 00:15:11.635 17:07:27 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:11.635 17:07:27 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:11.635 17:07:27 -- nvmf/common.sh@510 -- # local dev _ 00:15:11.635 17:07:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:11.635 17:07:27 -- nvmf/common.sh@509 -- # nvme list 00:15:11.635 17:07:27 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:11.635 17:07:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:11.635 17:07:27 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:11.635 17:07:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:11.635 17:07:27 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:11.635 17:07:27 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.200 17:07:28 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:12.200 17:07:28 -- common/autotest_common.sh@1177 -- # local i=0 00:15:12.200 17:07:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.200 17:07:28 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:12.200 17:07:28 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:12.200 17:07:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:14.095 17:07:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:14.095 17:07:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:14.095 17:07:30 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.095 17:07:30 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:14.095 17:07:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.095 17:07:30 -- common/autotest_common.sh@1187 -- # return 0 00:15:14.095 17:07:30 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:14.095 17:07:30 -- 
nvmf/common.sh@510 -- # local dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@509 -- # nvme list 00:15:14.095 17:07:30 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:14.095 17:07:30 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:14.095 17:07:30 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:14.095 /dev/nvme0n1 ]] 00:15:14.095 17:07:30 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:14.095 17:07:30 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:14.095 17:07:30 -- nvmf/common.sh@510 -- # local dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@509 -- # nvme list 00:15:14.095 17:07:30 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:14.095 17:07:30 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:14.095 17:07:30 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:14.095 17:07:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.095 17:07:30 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:14.095 17:07:30 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.353 17:07:30 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.353 17:07:30 -- common/autotest_common.sh@1198 -- # local i=0 00:15:14.353 17:07:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:14.353 17:07:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.353 17:07:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:14.353 17:07:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.353 17:07:30 -- common/autotest_common.sh@1210 -- # return 0 00:15:14.353 17:07:30 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:14.353 17:07:30 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.353 17:07:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.353 17:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:14.353 17:07:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.353 17:07:30 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:14.353 17:07:30 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:14.353 17:07:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:14.353 17:07:30 -- nvmf/common.sh@116 -- # sync 00:15:14.353 17:07:30 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:14.353 17:07:30 -- nvmf/common.sh@119 -- # set +e 00:15:14.353 17:07:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:14.353 17:07:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:14.353 rmmod nvme_tcp 00:15:14.353 rmmod nvme_fabrics 00:15:14.353 rmmod nvme_keyring 00:15:14.353 17:07:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:14.353 17:07:30 -- nvmf/common.sh@123 -- # set -e 00:15:14.353 17:07:30 -- nvmf/common.sh@124 -- # return 0 00:15:14.353 17:07:30 -- nvmf/common.sh@477 -- # '[' -n 510923 ']' 00:15:14.353 17:07:30 -- nvmf/common.sh@478 -- # killprocess 510923 00:15:14.353 17:07:30 -- common/autotest_common.sh@926 -- # '[' -z 510923 ']' 00:15:14.353 17:07:30 -- common/autotest_common.sh@930 -- # kill -0 510923 00:15:14.353 17:07:30 -- common/autotest_common.sh@931 -- # uname 00:15:14.353 17:07:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.353 17:07:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 510923 00:15:14.353 17:07:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:14.353 17:07:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:14.353 17:07:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 510923' 00:15:14.353 killing process with pid 510923 00:15:14.353 17:07:30 -- common/autotest_common.sh@945 -- # kill 510923 00:15:14.353 17:07:30 -- common/autotest_common.sh@950 -- # wait 510923 00:15:14.611 17:07:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:14.611 17:07:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:14.611 17:07:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:14.611 17:07:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.611 17:07:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:14.611 17:07:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.611 17:07:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.611 17:07:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.138 17:07:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:17.138 00:15:17.138 real 0m8.551s 00:15:17.138 user 0m16.898s 00:15:17.138 sys 0m2.159s 00:15:17.138 17:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.138 17:07:32 -- common/autotest_common.sh@10 -- # set +x 00:15:17.138 ************************************ 00:15:17.138 END TEST nvmf_nvme_cli 00:15:17.138 ************************************ 00:15:17.138 17:07:32 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:15:17.138 17:07:32 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:17.138 17:07:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:17.138 17:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.138 17:07:32 -- common/autotest_common.sh@10 -- # set +x 00:15:17.138 ************************************ 00:15:17.138 START TEST nvmf_vfio_user 00:15:17.138 ************************************ 00:15:17.138 17:07:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:17.138 * Looking for test storage... 
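Before the vfio-user test proceeds, it is worth summarizing the host side of the nvme_cli test that just passed: it discovered the target, connected with the generated hostnqn/hostid, verified that both Malloc namespaces surfaced by matching the subsystem serial, then disconnected and unloaded the initiator modules. A sketch of that flow using the values from the trace (nvme-cli against the kernel nvme-tcp initiator):

  # Host-side flow of the nvme_cli test above.
  modprobe nvme-tcp
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial: both namespaces report the subsystem serial number.
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp      # nvmfcleanup, as in the rmmod lines above
  modprobe -v -r nvme-fabrics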
00:15:17.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.138 17:07:32 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.138 17:07:32 -- nvmf/common.sh@7 -- # uname -s 00:15:17.138 17:07:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.138 17:07:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.138 17:07:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.138 17:07:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.138 17:07:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.138 17:07:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.138 17:07:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.138 17:07:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.138 17:07:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.138 17:07:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.138 17:07:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.138 17:07:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.138 17:07:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.138 17:07:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.138 17:07:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.138 17:07:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.138 17:07:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.138 17:07:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.139 17:07:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.139 17:07:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.139 17:07:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.139 17:07:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.139 17:07:32 -- paths/export.sh@5 -- # export PATH 00:15:17.139 17:07:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.139 17:07:32 -- nvmf/common.sh@46 -- # : 0 00:15:17.139 17:07:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:17.139 17:07:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:17.139 17:07:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:17.139 17:07:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.139 17:07:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.139 17:07:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:17.139 17:07:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:17.139 17:07:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=511752 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 511752' 00:15:17.139 Process pid: 511752 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:17.139 17:07:32 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 511752 00:15:17.139 17:07:32 -- common/autotest_common.sh@819 -- # '[' -z 511752 ']' 00:15:17.139 17:07:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.139 17:07:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.139 17:07:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.139 17:07:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.139 17:07:32 -- common/autotest_common.sh@10 -- # set +x 00:15:17.139 [2024-07-20 17:07:32.851063] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:17.139 [2024-07-20 17:07:32.851166] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.139 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.139 [2024-07-20 17:07:32.912667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.139 [2024-07-20 17:07:32.999345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:17.139 [2024-07-20 17:07:32.999496] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.139 [2024-07-20 17:07:32.999529] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.139 [2024-07-20 17:07:32.999542] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.139 [2024-07-20 17:07:32.999698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.139 [2024-07-20 17:07:32.999764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.139 [2024-07-20 17:07:32.999819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.139 [2024-07-20 17:07:32.999822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.701 17:07:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:17.701 17:07:33 -- common/autotest_common.sh@852 -- # return 0 00:15:17.701 17:07:33 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:19.069 17:07:34 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:19.069 17:07:35 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:19.069 17:07:35 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:19.069 17:07:35 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.069 17:07:35 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:19.069 17:07:35 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:19.331 Malloc1 00:15:19.331 17:07:35 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:19.588 17:07:35 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:19.844 17:07:35 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:20.100 17:07:36 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.100 17:07:36 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:20.100 17:07:36 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:20.358 Malloc2 00:15:20.358 17:07:36 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:20.615 17:07:36 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:20.871 17:07:36 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:21.130 17:07:37 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:21.130 17:07:37 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:21.130 17:07:37 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.130 17:07:37 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.130 17:07:37 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.130 17:07:37 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:21.130 [2024-07-20 17:07:37.106332] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:21.130 [2024-07-20 17:07:37.106373] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512325 ] 00:15:21.130 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.130 [2024-07-20 17:07:37.141164] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:21.130 [2024-07-20 17:07:37.149207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.130 [2024-07-20 17:07:37.149235] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f394556e000 00:15:21.131 [2024-07-20 17:07:37.150218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.131 [2024-07-20 17:07:37.151206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.131 [2024-07-20 17:07:37.152241] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.131 [2024-07-20 17:07:37.153212] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.131 [2024-07-20 17:07:37.155803] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.131 [2024-07-20 17:07:37.156226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.131 [2024-07-20 17:07:37.157233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.131 [2024-07-20 17:07:37.158237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.131 [2024-07-20 17:07:37.159247] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.131 [2024-07-20 17:07:37.159266] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3944322000 00:15:21.131 [2024-07-20 17:07:37.160381] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.131 [2024-07-20 17:07:37.180040] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:21.131 [2024-07-20 17:07:37.180078] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:21.131 [2024-07-20 17:07:37.182377] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.131 [2024-07-20 17:07:37.182440] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:21.131 [2024-07-20 17:07:37.182542] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:21.131 [2024-07-20 17:07:37.182579] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:21.131 [2024-07-20 17:07:37.182590] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:21.131 [2024-07-20 17:07:37.183365] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:21.131 [2024-07-20 17:07:37.183385] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:21.131 [2024-07-20 17:07:37.183398] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:21.131 [2024-07-20 17:07:37.184370] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.131 [2024-07-20 17:07:37.184389] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:21.131 [2024-07-20 17:07:37.184403] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:21.131 [2024-07-20 17:07:37.185377] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:21.131 [2024-07-20 17:07:37.185396] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.131 [2024-07-20 17:07:37.186379] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:15:21.131 [2024-07-20 17:07:37.186397] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:21.131 [2024-07-20 17:07:37.186406] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:21.131 [2024-07-20 17:07:37.186418] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.131 [2024-07-20 17:07:37.186527] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:21.131 [2024-07-20 17:07:37.186535] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.131 [2024-07-20 17:07:37.186544] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:21.131 [2024-07-20 17:07:37.187388] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:21.131 [2024-07-20 17:07:37.188396] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:21.131 [2024-07-20 17:07:37.189401] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.131 [2024-07-20 17:07:37.190438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.131 [2024-07-20 17:07:37.191408] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:21.131 [2024-07-20 17:07:37.191425] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.131 [2024-07-20 17:07:37.191438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.191463] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:21.131 [2024-07-20 17:07:37.191476] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.191501] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.131 [2024-07-20 17:07:37.191511] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.131 [2024-07-20 17:07:37.191534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.131 [2024-07-20 17:07:37.191603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:21.131 [2024-07-20 17:07:37.191620] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:21.131 [2024-07-20 17:07:37.191629] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:21.131 [2024-07-20 17:07:37.191636] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:21.131 [2024-07-20 17:07:37.191643] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:21.131 [2024-07-20 17:07:37.191652] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:21.131 [2024-07-20 17:07:37.191659] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:21.131 [2024-07-20 17:07:37.191666] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.191683] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.191699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:21.131 [2024-07-20 17:07:37.191715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:21.131 [2024-07-20 17:07:37.191735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.131 [2024-07-20 17:07:37.191748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.131 [2024-07-20 17:07:37.191759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.131 [2024-07-20 17:07:37.191771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.131 [2024-07-20 17:07:37.191800] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.191817] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.191832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:21.131 [2024-07-20 17:07:37.191844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:21.131 [2024-07-20 17:07:37.191859] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:21.131 [2024-07-20 17:07:37.191868] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.191887] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.191901] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.191915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.131 [2024-07-20 17:07:37.191929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:21.131 [2024-07-20 17:07:37.191997] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.192012] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.192026] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:21.131 [2024-07-20 17:07:37.192034] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:21.131 [2024-07-20 17:07:37.192044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:21.131 [2024-07-20 17:07:37.192062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:21.131 [2024-07-20 17:07:37.192102] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:21.131 [2024-07-20 17:07:37.192118] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.192134] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.131 [2024-07-20 17:07:37.192145] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.132 [2024-07-20 17:07:37.192153] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.132 [2024-07-20 17:07:37.192163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.132 [2024-07-20 17:07:37.192183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:21.132 [2024-07-20 17:07:37.192207] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:21.132 [2024-07-20 17:07:37.192221] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.132 [2024-07-20 17:07:37.192233] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.132 [2024-07-20 17:07:37.192241] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.132 [2024-07-20 17:07:37.192250] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.132 [2024-07-20 17:07:37.192265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:21.132 [2024-07-20 17:07:37.192283] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.132 [2024-07-20 17:07:37.192295] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:21.132 [2024-07-20 17:07:37.192309] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:21.132 [2024-07-20 17:07:37.192320] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:21.132 [2024-07-20 17:07:37.192328] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:21.132 [2024-07-20 17:07:37.192337] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.132 [2024-07-20 17:07:37.192345] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:21.132 [2024-07-20 17:07:37.192353] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:21.132 [2024-07-20 17:07:37.192381] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:21.132 [2024-07-20 17:07:37.192399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:21.132 [2024-07-20 17:07:37.192417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:21.132 [2024-07-20 17:07:37.192428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:21.132 [2024-07-20 17:07:37.192443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:21.132 [2024-07-20 17:07:37.192454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:21.132 [2024-07-20 17:07:37.192470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.132 [2024-07-20 17:07:37.192481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:21.132 [2024-07-20 17:07:37.192498] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:21.132 [2024-07-20 17:07:37.192507] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:21.132 [2024-07-20 17:07:37.192513] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:21.132 [2024-07-20 
17:07:37.192519] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:21.132 [2024-07-20 17:07:37.192529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:21.132 [2024-07-20 17:07:37.192540] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:21.132 [2024-07-20 17:07:37.192548] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:21.132 [2024-07-20 17:07:37.192557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:21.132 [2024-07-20 17:07:37.192567] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:21.132 [2024-07-20 17:07:37.192575] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.132 [2024-07-20 17:07:37.192584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.132 [2024-07-20 17:07:37.192599] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:21.132 [2024-07-20 17:07:37.192608] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:21.132 [2024-07-20 17:07:37.192617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:21.132 [2024-07-20 17:07:37.192628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:21.132 [2024-07-20 17:07:37.192648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:21.132 [2024-07-20 17:07:37.192663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:21.132 [2024-07-20 17:07:37.192674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:21.132 ===================================================== 00:15:21.132 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:21.132 ===================================================== 00:15:21.132 Controller Capabilities/Features 00:15:21.132 ================================ 00:15:21.132 Vendor ID: 4e58 00:15:21.132 Subsystem Vendor ID: 4e58 00:15:21.132 Serial Number: SPDK1 00:15:21.132 Model Number: SPDK bdev Controller 00:15:21.132 Firmware Version: 24.01.1 00:15:21.132 Recommended Arb Burst: 6 00:15:21.132 IEEE OUI Identifier: 8d 6b 50 00:15:21.132 Multi-path I/O 00:15:21.132 May have multiple subsystem ports: Yes 00:15:21.132 May have multiple controllers: Yes 00:15:21.132 Associated with SR-IOV VF: No 00:15:21.132 Max Data Transfer Size: 131072 00:15:21.132 Max Number of Namespaces: 32 00:15:21.132 Max Number of I/O Queues: 127 00:15:21.132 NVMe Specification Version (VS): 1.3 00:15:21.132 NVMe Specification Version (Identify): 1.3 00:15:21.132 Maximum Queue Entries: 256 00:15:21.132 Contiguous Queues Required: Yes 00:15:21.132 Arbitration Mechanisms Supported 00:15:21.132 
Weighted Round Robin: Not Supported 00:15:21.132 Vendor Specific: Not Supported 00:15:21.132 Reset Timeout: 15000 ms 00:15:21.132 Doorbell Stride: 4 bytes 00:15:21.132 NVM Subsystem Reset: Not Supported 00:15:21.132 Command Sets Supported 00:15:21.132 NVM Command Set: Supported 00:15:21.132 Boot Partition: Not Supported 00:15:21.132 Memory Page Size Minimum: 4096 bytes 00:15:21.132 Memory Page Size Maximum: 4096 bytes 00:15:21.132 Persistent Memory Region: Not Supported 00:15:21.132 Optional Asynchronous Events Supported 00:15:21.132 Namespace Attribute Notices: Supported 00:15:21.132 Firmware Activation Notices: Not Supported 00:15:21.132 ANA Change Notices: Not Supported 00:15:21.132 PLE Aggregate Log Change Notices: Not Supported 00:15:21.132 LBA Status Info Alert Notices: Not Supported 00:15:21.132 EGE Aggregate Log Change Notices: Not Supported 00:15:21.132 Normal NVM Subsystem Shutdown event: Not Supported 00:15:21.132 Zone Descriptor Change Notices: Not Supported 00:15:21.132 Discovery Log Change Notices: Not Supported 00:15:21.132 Controller Attributes 00:15:21.132 128-bit Host Identifier: Supported 00:15:21.132 Non-Operational Permissive Mode: Not Supported 00:15:21.132 NVM Sets: Not Supported 00:15:21.132 Read Recovery Levels: Not Supported 00:15:21.132 Endurance Groups: Not Supported 00:15:21.132 Predictable Latency Mode: Not Supported 00:15:21.132 Traffic Based Keep ALive: Not Supported 00:15:21.132 Namespace Granularity: Not Supported 00:15:21.132 SQ Associations: Not Supported 00:15:21.132 UUID List: Not Supported 00:15:21.132 Multi-Domain Subsystem: Not Supported 00:15:21.132 Fixed Capacity Management: Not Supported 00:15:21.132 Variable Capacity Management: Not Supported 00:15:21.132 Delete Endurance Group: Not Supported 00:15:21.132 Delete NVM Set: Not Supported 00:15:21.132 Extended LBA Formats Supported: Not Supported 00:15:21.132 Flexible Data Placement Supported: Not Supported 00:15:21.132 00:15:21.132 Controller Memory Buffer Support 00:15:21.132 ================================ 00:15:21.132 Supported: No 00:15:21.132 00:15:21.132 Persistent Memory Region Support 00:15:21.132 ================================ 00:15:21.132 Supported: No 00:15:21.132 00:15:21.132 Admin Command Set Attributes 00:15:21.132 ============================ 00:15:21.132 Security Send/Receive: Not Supported 00:15:21.132 Format NVM: Not Supported 00:15:21.132 Firmware Activate/Download: Not Supported 00:15:21.132 Namespace Management: Not Supported 00:15:21.132 Device Self-Test: Not Supported 00:15:21.132 Directives: Not Supported 00:15:21.132 NVMe-MI: Not Supported 00:15:21.132 Virtualization Management: Not Supported 00:15:21.132 Doorbell Buffer Config: Not Supported 00:15:21.132 Get LBA Status Capability: Not Supported 00:15:21.132 Command & Feature Lockdown Capability: Not Supported 00:15:21.132 Abort Command Limit: 4 00:15:21.132 Async Event Request Limit: 4 00:15:21.132 Number of Firmware Slots: N/A 00:15:21.132 Firmware Slot 1 Read-Only: N/A 00:15:21.132 Firmware Activation Without Reset: N/A 00:15:21.132 Multiple Update Detection Support: N/A 00:15:21.132 Firmware Update Granularity: No Information Provided 00:15:21.132 Per-Namespace SMART Log: No 00:15:21.133 Asymmetric Namespace Access Log Page: Not Supported 00:15:21.133 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:21.133 Command Effects Log Page: Supported 00:15:21.133 Get Log Page Extended Data: Supported 00:15:21.133 Telemetry Log Pages: Not Supported 00:15:21.133 Persistent Event Log Pages: Not Supported 00:15:21.133 Supported 
Log Pages Log Page: May Support 00:15:21.133 Commands Supported & Effects Log Page: Not Supported 00:15:21.133 Feature Identifiers & Effects Log Page:May Support 00:15:21.133 NVMe-MI Commands & Effects Log Page: May Support 00:15:21.133 Data Area 4 for Telemetry Log: Not Supported 00:15:21.133 Error Log Page Entries Supported: 128 00:15:21.133 Keep Alive: Supported 00:15:21.133 Keep Alive Granularity: 10000 ms 00:15:21.133 00:15:21.133 NVM Command Set Attributes 00:15:21.133 ========================== 00:15:21.133 Submission Queue Entry Size 00:15:21.133 Max: 64 00:15:21.133 Min: 64 00:15:21.133 Completion Queue Entry Size 00:15:21.133 Max: 16 00:15:21.133 Min: 16 00:15:21.133 Number of Namespaces: 32 00:15:21.133 Compare Command: Supported 00:15:21.133 Write Uncorrectable Command: Not Supported 00:15:21.133 Dataset Management Command: Supported 00:15:21.133 Write Zeroes Command: Supported 00:15:21.133 Set Features Save Field: Not Supported 00:15:21.133 Reservations: Not Supported 00:15:21.133 Timestamp: Not Supported 00:15:21.133 Copy: Supported 00:15:21.133 Volatile Write Cache: Present 00:15:21.133 Atomic Write Unit (Normal): 1 00:15:21.133 Atomic Write Unit (PFail): 1 00:15:21.133 Atomic Compare & Write Unit: 1 00:15:21.133 Fused Compare & Write: Supported 00:15:21.133 Scatter-Gather List 00:15:21.133 SGL Command Set: Supported (Dword aligned) 00:15:21.133 SGL Keyed: Not Supported 00:15:21.133 SGL Bit Bucket Descriptor: Not Supported 00:15:21.133 SGL Metadata Pointer: Not Supported 00:15:21.133 Oversized SGL: Not Supported 00:15:21.133 SGL Metadata Address: Not Supported 00:15:21.133 SGL Offset: Not Supported 00:15:21.133 Transport SGL Data Block: Not Supported 00:15:21.133 Replay Protected Memory Block: Not Supported 00:15:21.133 00:15:21.133 Firmware Slot Information 00:15:21.133 ========================= 00:15:21.133 Active slot: 1 00:15:21.133 Slot 1 Firmware Revision: 24.01.1 00:15:21.133 00:15:21.133 00:15:21.133 Commands Supported and Effects 00:15:21.133 ============================== 00:15:21.133 Admin Commands 00:15:21.133 -------------- 00:15:21.133 Get Log Page (02h): Supported 00:15:21.133 Identify (06h): Supported 00:15:21.133 Abort (08h): Supported 00:15:21.133 Set Features (09h): Supported 00:15:21.133 Get Features (0Ah): Supported 00:15:21.133 Asynchronous Event Request (0Ch): Supported 00:15:21.133 Keep Alive (18h): Supported 00:15:21.133 I/O Commands 00:15:21.133 ------------ 00:15:21.133 Flush (00h): Supported LBA-Change 00:15:21.133 Write (01h): Supported LBA-Change 00:15:21.133 Read (02h): Supported 00:15:21.133 Compare (05h): Supported 00:15:21.133 Write Zeroes (08h): Supported LBA-Change 00:15:21.133 Dataset Management (09h): Supported LBA-Change 00:15:21.133 Copy (19h): Supported LBA-Change 00:15:21.133 Unknown (79h): Supported LBA-Change 00:15:21.133 Unknown (7Ah): Supported 00:15:21.133 00:15:21.133 Error Log 00:15:21.133 ========= 00:15:21.133 00:15:21.133 Arbitration 00:15:21.133 =========== 00:15:21.133 Arbitration Burst: 1 00:15:21.133 00:15:21.133 Power Management 00:15:21.133 ================ 00:15:21.133 Number of Power States: 1 00:15:21.133 Current Power State: Power State #0 00:15:21.133 Power State #0: 00:15:21.133 Max Power: 0.00 W 00:15:21.133 Non-Operational State: Operational 00:15:21.133 Entry Latency: Not Reported 00:15:21.133 Exit Latency: Not Reported 00:15:21.133 Relative Read Throughput: 0 00:15:21.133 Relative Read Latency: 0 00:15:21.133 Relative Write Throughput: 0 00:15:21.133 Relative Write Latency: 0 00:15:21.133 Idle Power: Not 
Reported 00:15:21.133 Active Power: Not Reported 00:15:21.133 Non-Operational Permissive Mode: Not Supported 00:15:21.133 00:15:21.133 Health Information 00:15:21.133 ================== 00:15:21.133 Critical Warnings: 00:15:21.133 Available Spare Space: OK 00:15:21.133 Temperature: OK 00:15:21.133 Device Reliability: OK 00:15:21.133 Read Only: No 00:15:21.133 Volatile Memory Backup: OK 00:15:21.133 Current Temperature: 0 Kelvin (-273 Celsius) [2024-07-20 17:07:37.192822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:21.133 [2024-07-20 17:07:37.192840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:21.133 [2024-07-20 17:07:37.192880] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:21.133 [2024-07-20 17:07:37.192898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.133 [2024-07-20 17:07:37.192909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.133 [2024-07-20 17:07:37.192920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.133 [2024-07-20 17:07:37.192930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.133 [2024-07-20 17:07:37.193421] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.133 [2024-07-20 17:07:37.193443] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:21.133 [2024-07-20 17:07:37.194462] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:21.133 [2024-07-20 17:07:37.194475] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:21.133 [2024-07-20 17:07:37.195425] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:21.133 [2024-07-20 17:07:37.195447] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:21.133 [2024-07-20 17:07:37.195504] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:21.133 [2024-07-20 17:07:37.200803] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.133 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:21.133 Available Spare: 0% 00:15:21.133 Available Spare Threshold: 0% 00:15:21.133 Life Percentage Used: 0% 00:15:21.133 Data Units Read: 0 00:15:21.133 Data Units Written: 0 00:15:21.133 Host Read Commands: 0 00:15:21.133 Host Write Commands: 0 00:15:21.133 Controller Busy Time: 0 minutes 00:15:21.133 Power Cycles: 0 00:15:21.133 Power On Hours: 0 hours 00:15:21.133 Unsafe Shutdowns: 0 00:15:21.133 Unrecoverable Media Errors: 0 00:15:21.133 Lifetime Error Log Entries: 0 00:15:21.133 Warning Temperature 
Time: 0 minutes 00:15:21.133 Critical Temperature Time: 0 minutes 00:15:21.133 00:15:21.133 Number of Queues 00:15:21.133 ================ 00:15:21.133 Number of I/O Submission Queues: 127 00:15:21.133 Number of I/O Completion Queues: 127 00:15:21.133 00:15:21.133 Active Namespaces 00:15:21.133 ================= 00:15:21.133 Namespace ID:1 00:15:21.133 Error Recovery Timeout: Unlimited 00:15:21.133 Command Set Identifier: NVM (00h) 00:15:21.133 Deallocate: Supported 00:15:21.133 Deallocated/Unwritten Error: Not Supported 00:15:21.133 Deallocated Read Value: Unknown 00:15:21.133 Deallocate in Write Zeroes: Not Supported 00:15:21.133 Deallocated Guard Field: 0xFFFF 00:15:21.133 Flush: Supported 00:15:21.133 Reservation: Supported 00:15:21.133 Namespace Sharing Capabilities: Multiple Controllers 00:15:21.133 Size (in LBAs): 131072 (0GiB) 00:15:21.133 Capacity (in LBAs): 131072 (0GiB) 00:15:21.133 Utilization (in LBAs): 131072 (0GiB) 00:15:21.133 NGUID: E44029E14D374089844FF2BEBD2A5B05 00:15:21.133 UUID: e44029e1-4d37-4089-844f-f2bebd2a5b05 00:15:21.133 Thin Provisioning: Not Supported 00:15:21.133 Per-NS Atomic Units: Yes 00:15:21.133 Atomic Boundary Size (Normal): 0 00:15:21.133 Atomic Boundary Size (PFail): 0 00:15:21.133 Atomic Boundary Offset: 0 00:15:21.133 Maximum Single Source Range Length: 65535 00:15:21.133 Maximum Copy Length: 65535 00:15:21.133 Maximum Source Range Count: 1 00:15:21.133 NGUID/EUI64 Never Reused: No 00:15:21.133 Namespace Write Protected: No 00:15:21.133 Number of LBA Formats: 1 00:15:21.133 Current LBA Format: LBA Format #00 00:15:21.133 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:21.133 00:15:21.133 17:07:37 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:21.133 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.390 Initializing NVMe Controllers 00:15:26.390 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:26.390 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:26.390 Initialization complete. Launching workers. 00:15:26.390 ======================================================== 00:15:26.390 Latency(us) 00:15:26.390 Device Information : IOPS MiB/s Average min max 00:15:26.390 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36583.79 142.91 3499.53 1157.82 8507.49 00:15:26.390 ======================================================== 00:15:26.390 Total : 36583.79 142.91 3499.53 1157.82 8507.49 00:15:26.390 00:15:26.390 17:07:42 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:26.390 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.654 Initializing NVMe Controllers 00:15:31.654 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:31.654 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:31.654 Initialization complete. Launching workers. 
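The subsystem these perf runs exercise was assembled a few seconds earlier via JSON-RPC (nvmf_vfio_user.sh@64-74 in the trace above), against an nvmf_tgt started as `nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'` (pid 511752). Condensed into a standalone sketch, with $SPDK standing in for the long workspace path (that variable is an assumption for readability; every command and argument is copied from the trace):

    SPDK=/path/to/spdk            # hypothetical checkout location
    rpc=$SPDK/scripts/rpc.py

    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

Note that the listener's traddr is a directory rather than an IP address; the host-side tools (spdk_nvme_identify above, spdk_nvme_perf here) point at the same path in their -r transport string.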
00:15:31.654 ======================================================== 00:15:31.654 Latency(us) 00:15:31.654 Device Information : IOPS MiB/s Average min max 00:15:31.654 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16012.40 62.55 7999.05 6941.97 14972.48 00:15:31.654 ======================================================== 00:15:31.654 Total : 16012.40 62.55 7999.05 6941.97 14972.48 00:15:31.654 00:15:31.654 17:07:47 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:31.911 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.175 Initializing NVMe Controllers 00:15:37.175 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:37.175 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:37.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:37.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:37.175 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:37.175 Initialization complete. Launching workers. 00:15:37.175 Starting thread on core 2 00:15:37.175 Starting thread on core 3 00:15:37.175 Starting thread on core 1 00:15:37.175 17:07:53 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:37.175 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.370 Initializing NVMe Controllers 00:15:41.370 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.370 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.370 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:41.370 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:41.370 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:41.370 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:41.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.370 Initialization complete. Launching workers. 
00:15:41.370 Starting thread on core 1 with urgent priority queue 00:15:41.370 Starting thread on core 2 with urgent priority queue 00:15:41.370 Starting thread on core 3 with urgent priority queue 00:15:41.370 Starting thread on core 0 with urgent priority queue 00:15:41.370 SPDK bdev Controller (SPDK1 ) core 0: 5462.00 IO/s 18.31 secs/100000 ios 00:15:41.370 SPDK bdev Controller (SPDK1 ) core 1: 5229.67 IO/s 19.12 secs/100000 ios 00:15:41.370 SPDK bdev Controller (SPDK1 ) core 2: 5725.00 IO/s 17.47 secs/100000 ios 00:15:41.370 SPDK bdev Controller (SPDK1 ) core 3: 5698.67 IO/s 17.55 secs/100000 ios 00:15:41.370 ======================================================== 00:15:41.370 00:15:41.370 17:07:57 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.370 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.370 Initializing NVMe Controllers 00:15:41.370 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.370 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:41.370 Namespace ID: 1 size: 0GB 00:15:41.370 Initialization complete. 00:15:41.370 INFO: using host memory buffer for IO 00:15:41.370 Hello world! 00:15:41.370 17:07:57 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:41.370 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.788 Initializing NVMe Controllers 00:15:42.788 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.788 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.788 Initialization complete. Launching workers. 
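The timed host-side runs in this test (@84 read, @85 write, @86 reconnect, @87 arbitration, @89 overhead) all address the target through the same -r transport string. As a self-contained sketch of the read run, with the flags copied from the trace and only the workspace path abbreviated:

    SPDK=/path/to/spdk   # hypothetical checkout location
    "$SPDK"/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -q 128 -o 4096 -w read -t 5 -c 0x2 -s 256 -g
    # -q 128  queue depth; -o 4096  I/O size in bytes; -w read  workload type
    # -t 5    run time in seconds; -c 0x2  core mask (core 1, matching
    #         "with lcore 1" above); -s 256 and -g are DPDK memory options,
    #         left exactly as in the trace
    
The reported numbers are self-consistent under Little's law: 128 outstanding I/Os at 36583.79 IOPS gives 128 / 36583.79 s ≈ 3499 us per I/O, matching the 3499.53 us average latency of the read run, and the write run's 16012.40 IOPS likewise implies 128 / 16012.40 s ≈ 7994 us against its reported 7999.05 us.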
00:15:42.788 submit (in ns) avg, min, max = 6440.7, 3444.4, 4013621.1 00:15:42.788 complete (in ns) avg, min, max = 27390.7, 2033.3, 4015566.7 00:15:42.788 00:15:42.788 Submit histogram 00:15:42.788 ================ 00:15:42.788 Range in us Cumulative Count 00:15:42.788 3.437 - 3.461: 0.5716% ( 79) 00:15:42.788 3.461 - 3.484: 2.1777% ( 222) 00:15:42.788 3.484 - 3.508: 6.1930% ( 555) 00:15:42.788 3.508 - 3.532: 13.0734% ( 951) 00:15:42.788 3.532 - 3.556: 23.1877% ( 1398) 00:15:42.788 3.556 - 3.579: 32.7738% ( 1325) 00:15:42.788 3.579 - 3.603: 40.1896% ( 1025) 00:15:42.788 3.603 - 3.627: 47.4316% ( 1001) 00:15:42.788 3.627 - 3.650: 55.1295% ( 1064) 00:15:42.788 3.650 - 3.674: 60.9753% ( 808) 00:15:42.789 3.674 - 3.698: 65.4898% ( 624) 00:15:42.789 3.698 - 3.721: 68.5067% ( 417) 00:15:42.789 3.721 - 3.745: 71.1547% ( 366) 00:15:42.789 3.745 - 3.769: 74.6708% ( 486) 00:15:42.789 3.769 - 3.793: 78.4691% ( 525) 00:15:42.789 3.793 - 3.816: 81.7176% ( 449) 00:15:42.789 3.816 - 3.840: 84.6477% ( 405) 00:15:42.789 3.840 - 3.864: 87.2233% ( 356) 00:15:42.789 3.864 - 3.887: 89.2273% ( 277) 00:15:42.789 3.887 - 3.911: 90.9130% ( 233) 00:15:42.789 3.911 - 3.935: 92.2877% ( 190) 00:15:42.789 3.935 - 3.959: 93.3874% ( 152) 00:15:42.789 3.959 - 3.982: 94.3858% ( 138) 00:15:42.789 3.982 - 4.006: 95.2033% ( 113) 00:15:42.789 4.006 - 4.030: 95.7821% ( 80) 00:15:42.789 4.030 - 4.053: 96.2668% ( 67) 00:15:42.789 4.053 - 4.077: 96.5707% ( 42) 00:15:42.789 4.077 - 4.101: 96.7299% ( 22) 00:15:42.789 4.101 - 4.124: 96.8818% ( 21) 00:15:42.789 4.124 - 4.148: 97.0482% ( 23) 00:15:42.789 4.148 - 4.172: 97.1205% ( 10) 00:15:42.789 4.172 - 4.196: 97.1639% ( 6) 00:15:42.789 4.196 - 4.219: 97.2725% ( 15) 00:15:42.789 4.219 - 4.243: 97.3448% ( 10) 00:15:42.789 4.243 - 4.267: 97.3955% ( 7) 00:15:42.789 4.267 - 4.290: 97.4316% ( 5) 00:15:42.789 4.290 - 4.314: 97.5257% ( 13) 00:15:42.789 4.314 - 4.338: 97.5619% ( 5) 00:15:42.789 4.338 - 4.361: 97.5763% ( 2) 00:15:42.789 4.361 - 4.385: 97.6125% ( 5) 00:15:42.789 4.385 - 4.409: 97.6342% ( 3) 00:15:42.789 4.433 - 4.456: 97.6487% ( 2) 00:15:42.789 4.456 - 4.480: 97.6559% ( 1) 00:15:42.789 4.480 - 4.504: 97.6631% ( 1) 00:15:42.789 4.551 - 4.575: 97.6704% ( 1) 00:15:42.789 4.575 - 4.599: 97.6776% ( 1) 00:15:42.789 4.599 - 4.622: 97.6921% ( 2) 00:15:42.789 4.622 - 4.646: 97.6993% ( 1) 00:15:42.789 4.646 - 4.670: 97.7427% ( 6) 00:15:42.789 4.670 - 4.693: 97.7644% ( 3) 00:15:42.789 4.693 - 4.717: 97.7934% ( 4) 00:15:42.789 4.717 - 4.741: 97.8223% ( 4) 00:15:42.789 4.741 - 4.764: 97.8730% ( 7) 00:15:42.789 4.764 - 4.788: 97.9236% ( 7) 00:15:42.789 4.788 - 4.812: 97.9598% ( 5) 00:15:42.789 4.812 - 4.836: 97.9959% ( 5) 00:15:42.789 4.836 - 4.859: 98.0321% ( 5) 00:15:42.789 4.859 - 4.883: 98.0828% ( 7) 00:15:42.789 4.883 - 4.907: 98.1045% ( 3) 00:15:42.789 4.907 - 4.930: 98.1334% ( 4) 00:15:42.789 4.930 - 4.954: 98.1623% ( 4) 00:15:42.789 4.954 - 4.978: 98.1841% ( 3) 00:15:42.789 4.978 - 5.001: 98.2202% ( 5) 00:15:42.789 5.001 - 5.025: 98.2419% ( 3) 00:15:42.789 5.025 - 5.049: 98.2564% ( 2) 00:15:42.789 5.049 - 5.073: 98.2636% ( 1) 00:15:42.789 5.073 - 5.096: 98.2998% ( 5) 00:15:42.789 5.096 - 5.120: 98.3143% ( 2) 00:15:42.789 5.120 - 5.144: 98.3432% ( 4) 00:15:42.789 5.144 - 5.167: 98.3577% ( 2) 00:15:42.789 5.167 - 5.191: 98.3722% ( 2) 00:15:42.789 5.191 - 5.215: 98.3794% ( 1) 00:15:42.789 5.262 - 5.286: 98.3866% ( 1) 00:15:42.789 5.286 - 5.310: 98.3939% ( 1) 00:15:42.789 5.310 - 5.333: 98.4156% ( 3) 00:15:42.789 5.333 - 5.357: 98.4300% ( 2) 00:15:42.789 5.357 - 5.381: 98.4373% ( 1) 
00:15:42.789 5.404 - 5.428: 98.4445% ( 1) 00:15:42.789 5.499 - 5.523: 98.4517% ( 1) 00:15:42.789 5.618 - 5.641: 98.4590% ( 1) 00:15:42.789 5.784 - 5.807: 98.4734% ( 2) 00:15:42.789 5.926 - 5.950: 98.4807% ( 1) 00:15:42.789 6.021 - 6.044: 98.4879% ( 1) 00:15:42.789 6.068 - 6.116: 98.4952% ( 1) 00:15:42.789 6.305 - 6.353: 98.5024% ( 1) 00:15:42.789 6.684 - 6.732: 98.5096% ( 1) 00:15:42.789 6.732 - 6.779: 98.5169% ( 1) 00:15:42.789 6.827 - 6.874: 98.5241% ( 1) 00:15:42.789 6.874 - 6.921: 98.5313% ( 1) 00:15:42.789 6.921 - 6.969: 98.5386% ( 1) 00:15:42.789 7.064 - 7.111: 98.5530% ( 2) 00:15:42.789 7.159 - 7.206: 98.5603% ( 1) 00:15:42.789 7.253 - 7.301: 98.5747% ( 2) 00:15:42.789 7.348 - 7.396: 98.5820% ( 1) 00:15:42.789 7.538 - 7.585: 98.6037% ( 3) 00:15:42.789 7.680 - 7.727: 98.6109% ( 1) 00:15:42.789 7.727 - 7.775: 98.6254% ( 2) 00:15:42.789 7.775 - 7.822: 98.6471% ( 3) 00:15:42.789 7.822 - 7.870: 98.6760% ( 4) 00:15:42.789 7.870 - 7.917: 98.6905% ( 2) 00:15:42.789 7.917 - 7.964: 98.7050% ( 2) 00:15:42.789 8.107 - 8.154: 98.7267% ( 3) 00:15:42.789 8.154 - 8.201: 98.7556% ( 4) 00:15:42.789 8.201 - 8.249: 98.7628% ( 1) 00:15:42.789 8.249 - 8.296: 98.7773% ( 2) 00:15:42.789 8.296 - 8.344: 98.7845% ( 1) 00:15:42.789 8.391 - 8.439: 98.7990% ( 2) 00:15:42.789 8.486 - 8.533: 98.8063% ( 1) 00:15:42.789 8.628 - 8.676: 98.8135% ( 1) 00:15:42.789 8.676 - 8.723: 98.8207% ( 1) 00:15:42.789 8.818 - 8.865: 98.8424% ( 3) 00:15:42.789 9.007 - 9.055: 98.8497% ( 1) 00:15:42.789 9.434 - 9.481: 98.8569% ( 1) 00:15:42.789 9.529 - 9.576: 98.8714% ( 2) 00:15:42.789 9.576 - 9.624: 98.8858% ( 2) 00:15:42.789 9.719 - 9.766: 98.8931% ( 1) 00:15:42.789 9.813 - 9.861: 98.9003% ( 1) 00:15:42.789 9.908 - 9.956: 98.9075% ( 1) 00:15:42.789 10.050 - 10.098: 98.9148% ( 1) 00:15:42.789 10.524 - 10.572: 98.9220% ( 1) 00:15:42.789 10.667 - 10.714: 98.9292% ( 1) 00:15:42.789 10.809 - 10.856: 98.9437% ( 2) 00:15:42.789 10.856 - 10.904: 98.9582% ( 2) 00:15:42.789 11.236 - 11.283: 98.9654% ( 1) 00:15:42.789 11.378 - 11.425: 98.9799% ( 2) 00:15:42.789 11.425 - 11.473: 98.9871% ( 1) 00:15:42.789 11.473 - 11.520: 98.9944% ( 1) 00:15:42.789 11.615 - 11.662: 99.0016% ( 1) 00:15:42.789 11.662 - 11.710: 99.0088% ( 1) 00:15:42.789 11.804 - 11.852: 99.0161% ( 1) 00:15:42.789 11.899 - 11.947: 99.0233% ( 1) 00:15:42.789 12.231 - 12.326: 99.0305% ( 1) 00:15:42.789 12.516 - 12.610: 99.0378% ( 1) 00:15:42.789 12.610 - 12.705: 99.0450% ( 1) 00:15:42.789 12.800 - 12.895: 99.0522% ( 1) 00:15:42.789 12.895 - 12.990: 99.0595% ( 1) 00:15:42.789 13.084 - 13.179: 99.0667% ( 1) 00:15:42.789 13.274 - 13.369: 99.0739% ( 1) 00:15:42.789 13.653 - 13.748: 99.0812% ( 1) 00:15:42.789 14.127 - 14.222: 99.0884% ( 1) 00:15:42.789 14.412 - 14.507: 99.0956% ( 1) 00:15:42.789 14.601 - 14.696: 99.1101% ( 2) 00:15:42.789 14.981 - 15.076: 99.1173% ( 1) 00:15:42.789 15.170 - 15.265: 99.1246% ( 1) 00:15:42.789 15.360 - 15.455: 99.1318% ( 1) 00:15:42.789 16.972 - 17.067: 99.1391% ( 1) 00:15:42.789 17.067 - 17.161: 99.1463% ( 1) 00:15:42.789 17.256 - 17.351: 99.1752% ( 4) 00:15:42.789 17.351 - 17.446: 99.1897% ( 2) 00:15:42.789 17.446 - 17.541: 99.2259% ( 5) 00:15:42.789 17.541 - 17.636: 99.2620% ( 5) 00:15:42.789 17.636 - 17.730: 99.2838% ( 3) 00:15:42.789 17.730 - 17.825: 99.3199% ( 5) 00:15:42.789 17.825 - 17.920: 99.3706% ( 7) 00:15:42.789 17.920 - 18.015: 99.3923% ( 3) 00:15:42.789 18.015 - 18.110: 99.4502% ( 8) 00:15:42.789 18.110 - 18.204: 99.5008% ( 7) 00:15:42.789 18.204 - 18.299: 99.5297% ( 4) 00:15:42.789 18.299 - 18.394: 99.5804% ( 7) 00:15:42.789 18.394 - 
18.489: 99.6093% ( 4) 00:15:42.789 18.489 - 18.584: 99.6672% ( 8) 00:15:42.789 18.584 - 18.679: 99.7106% ( 6) 00:15:42.789 18.679 - 18.773: 99.7468% ( 5) 00:15:42.789 18.868 - 18.963: 99.7685% ( 3) 00:15:42.789 18.963 - 19.058: 99.7902% ( 3) 00:15:42.789 19.058 - 19.153: 99.8119% ( 3) 00:15:42.789 19.153 - 19.247: 99.8336% ( 3) 00:15:42.789 19.342 - 19.437: 99.8481% ( 2) 00:15:42.789 19.532 - 19.627: 99.8625% ( 2) 00:15:42.789 19.721 - 19.816: 99.8698% ( 1) 00:15:42.789 19.911 - 20.006: 99.8770% ( 1) 00:15:42.789 20.101 - 20.196: 99.8842% ( 1) 00:15:42.789 21.428 - 21.523: 99.8915% ( 1) 00:15:42.789 21.713 - 21.807: 99.8987% ( 1) 00:15:42.789 22.661 - 22.756: 99.9059% ( 1) 00:15:42.789 23.988 - 24.083: 99.9132% ( 1) 00:15:42.789 25.031 - 25.221: 99.9204% ( 1) 00:15:42.789 26.738 - 26.927: 99.9277% ( 1) 00:15:42.789 29.013 - 29.203: 99.9349% ( 1) 00:15:42.789 3980.705 - 4004.978: 99.9928% ( 8) 00:15:42.789 4004.978 - 4029.250: 100.0000% ( 1) 00:15:42.789 00:15:42.789 Complete histogram 00:15:42.789 ================== 00:15:42.789 Range in us Cumulative Count 00:15:42.789 2.027 - 2.039: 0.3545% ( 49) 00:15:42.789 2.039 - 2.050: 19.2519% ( 2612) 00:15:42.789 2.050 - 2.062: 30.6034% ( 1569) 00:15:42.789 2.062 - 2.074: 34.4957% ( 538) 00:15:42.789 2.074 - 2.086: 57.6328% ( 3198) 00:15:42.789 2.086 - 2.098: 63.8403% ( 858) 00:15:42.789 2.098 - 2.110: 66.4231% ( 357) 00:15:42.789 2.110 - 2.121: 73.0719% ( 919) 00:15:42.789 2.121 - 2.133: 74.3959% ( 183) 00:15:42.789 2.133 - 2.145: 78.6210% ( 584) 00:15:42.789 2.145 - 2.157: 87.3897% ( 1212) 00:15:42.789 2.157 - 2.169: 89.3141% ( 266) 00:15:42.789 2.169 - 2.181: 90.7900% ( 204) 00:15:42.789 2.181 - 2.193: 92.0923% ( 180) 00:15:42.789 2.193 - 2.204: 92.6639% ( 79) 00:15:42.789 2.204 - 2.216: 94.0530% ( 192) 00:15:42.790 2.216 - 2.228: 95.3697% ( 182) 00:15:42.790 2.228 - 2.240: 95.5795% ( 29) 00:15:42.790 2.240 - 2.252: 95.7676% ( 26) 00:15:42.790 2.252 - 2.264: 95.8544% ( 12) 00:15:42.790 2.264 - 2.276: 95.9630% ( 15) 00:15:42.790 2.276 - 2.287: 96.1800% ( 30) 00:15:42.790 2.287 - 2.299: 96.3175% ( 19) 00:15:42.790 2.299 - 2.311: 96.4405% ( 17) 00:15:42.790 2.311 - 2.323: 96.6286% ( 26) 00:15:42.790 2.323 - 2.335: 96.8456% ( 30) 00:15:42.790 2.335 - 2.347: 97.0988% ( 35) 00:15:42.790 2.347 - 2.359: 97.4606% ( 50) 00:15:42.790 2.359 - 2.370: 97.7138% ( 35) 00:15:42.790 2.370 - 2.382: 97.8874% ( 24) 00:15:42.790 2.382 - 2.394: 98.0538% ( 23) 00:15:42.790 2.394 - 2.406: 98.1334% ( 11) 00:15:42.790 2.406 - 2.418: 98.1696% ( 5) 00:15:42.790 2.418 - 2.430: 98.2275% ( 8) 00:15:42.790 2.430 - 2.441: 98.2636% ( 5) 00:15:42.790 2.441 - 2.453: 98.2853% ( 3) 00:15:42.790 2.453 - 2.465: 98.2998% ( 2) 00:15:42.790 2.465 - 2.477: 98.3288% ( 4) 00:15:42.790 2.477 - 2.489: 98.3505% ( 3) 00:15:42.790 2.489 - 2.501: 98.3577% ( 1) 00:15:42.790 2.513 - 2.524: 98.3649% ( 1) 00:15:42.790 2.524 - 2.536: 98.3722% ( 1) 00:15:42.790 2.560 - 2.572: 98.3794% ( 1) 00:15:42.790 2.643 - 2.655: 98.3866% ( 1) 00:15:42.790 2.655 - 2.667: 98.3939% ( 1) 00:15:42.790 2.667 - 2.679: 98.4011% ( 1) 00:15:42.790 2.690 - 2.702: 98.4083% ( 1) 00:15:42.790 2.761 - 2.773: 98.4156% ( 1) 00:15:42.790 3.058 - 3.081: 98.4228% ( 1) 00:15:42.790 3.081 - 3.105: 98.4300% ( 1) 00:15:42.790 3.105 - 3.129: 98.4373% ( 1) 00:15:42.790 3.129 - 3.153: 98.4517% ( 2) 00:15:42.790 3.153 - 3.176: 98.4662% ( 2) 00:15:42.790 3.176 - 3.200: 98.4807% ( 2) 00:15:42.790 3.200 - 3.224: 98.4952% ( 2) 00:15:42.790 3.224 - 3.247: 98.5096% ( 2) 00:15:42.790 3.295 - 3.319: 98.5169% ( 1) 00:15:42.790 3.319 - 3.342: 98.5241% 
( 1) 00:15:42.790 3.390 - 3.413: 98.5313% ( 1) 00:15:42.790 3.413 - 3.437: 98.5386% ( 1) 00:15:42.790 3.484 - 3.508: 98.5458% ( 1) 00:15:42.790 3.579 - 3.603: 98.5530% ( 1) 00:15:42.790 3.603 - 3.627: 98.5603% ( 1) 00:15:42.790 3.674 - 3.698: 98.5675% ( 1) 00:15:42.790 3.959 - 3.982: 98.5747% ( 1) 00:15:42.790 4.053 - 4.077: 98.5820% ( 1) 00:15:42.790 4.267 - 4.290: 98.5892% ( 1) 00:15:42.790 4.930 - 4.954: 98.5964% ( 1) 00:15:42.790 5.096 - 5.120: 98.6037% ( 1) 00:15:42.790 5.333 - 5.357: 98.6109% ( 1) 00:15:42.790 5.404 - 5.428: 98.6181% ( 1) 00:15:42.790 5.499 - 5.523: 98.6254% ( 1) 00:15:42.790 5.713 - 5.736: 98.6326% ( 1) 00:15:42.790 5.807 - 5.831: 98.6398% ( 1) 00:15:42.790 5.831 - 5.855: 98.6471% ( 1) 00:15:42.790 5.926 - 5.950: 98.6543% ( 1) 00:15:42.790 5.973 - 5.997: 98.6616% ( 1) 00:15:42.790 6.044 - 6.068: 98.6688% ( 1) 00:15:42.790 6.163 - 6.210: 98.6760% ( 1) 00:15:42.790 6.305 - 6.353: 98.6833% ( 1) 00:15:42.790 6.495 - 6.542: 98.6905% ( 1) 00:15:42.790 6.732 - 6.779: 98.6977% ( 1) 00:15:42.790 6.779 - 6.827: 98.7050% ( 1) 00:15:42.790 6.921 - 6.969: 98.7122% ( 1) 00:15:42.790 6.969 - 7.016: 98.7194% ( 1) 00:15:42.790 7.064 - 7.111: 98.7267% ( 1) 00:15:42.790 7.111 - 7.159: 98.7339% ( 1) 00:15:42.790 7.206 - 7.253: 98.7484% ( 2) 00:15:42.790 7.301 - 7.348: 98.7628% ( 2) 00:15:42.790 7.490 - 7.538: 98.7701% ( 1) 00:15:42.790 9.529 - 9.576: 98.7773% ( 1) 00:15:42.790 10.572 - 10.619: 98.7845% ( 1) 00:15:42.790 10.667 - 10.714: 98.7918% ( 1) 00:15:42.790 11.567 - 11.615: 98.7990% ( 1) 00:15:42.790 15.360 - 15.455: 98.8063% ( 1) 00:15:42.790 15.550 - 15.644: 98.8207% ( 2) 00:15:42.790 15.644 - 15.739: 98.8352% ( 2) 00:15:42.790 15.739 - 15.834: 98.8641% ( 4) 00:15:42.790 15.834 - 15.929: 98.8714% ( 1) 00:15:42.790 15.929 - 16.024: 98.8786% ( 1) 00:15:42.790 16.024 - 16.119: 98.8931% ( 2) 00:15:42.790 16.119 - 16.213: 98.9292% ( 5) 00:15:42.790 16.213 - 16.308: 98.9654% ( 5) 00:15:42.790 16.308 - 16.403: 98.9727% ( 1) 00:15:42.790 16.403 - 16.498: 99.0305% ( 8) 00:15:42.790 16.498 - 16.593: 99.1101% ( 11) 00:15:42.790 16.593 - 16.687: 99.1391% ( 4) 00:15:42.790 16.687 - 16.782: 99.1680% ( 4) 00:15:42.790 16.782 - 16.877: 99.1969% ( 4) 00:15:42.790 16.877 - 16.972: 99.2186% ( 3) 00:15:42.790 16.972 - 17.067: 99.2620% ( 6) 00:15:42.790 17.067 - 17.161: 99.2765% ( 2) 00:15:42.790 17.161 - 17.256: 99.2910% ( 2) 00:15:42.790 17.351 - 17.446: 99.2982% ( 1) 00:15:42.790 17.541 - 17.636: 99.3199% ( 3) 00:15:42.790 17.730 - 17.825: 99.3272% ( 1) 00:15:42.790 17.825 - 17.920: 99.3416% ( 2) 00:15:42.790 18.394 - 18.489: 99.3489% ( 1) 00:15:42.790 19.058 - 19.153: 99.3561% ( 1) 00:15:42.790 24.273 - 24.462: 99.3633% ( 1) 00:15:42.790 47.218 - 47.407: 99.3706% ( 1) 00:15:42.790 3980.705 - 4004.978: 99.7540% ( 53) 00:15:42.790 4004.978 - 4029.250: 100.0000% ( 34) 00:15:42.790 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.790 [2024-07-20 17:07:58.919397] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport 
is deprecated in favor of trtype to be removed in v24.05 00:15:42.790 [ 00:15:42.790 { 00:15:42.790 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.790 "subtype": "Discovery", 00:15:42.790 "listen_addresses": [], 00:15:42.790 "allow_any_host": true, 00:15:42.790 "hosts": [] 00:15:42.790 }, 00:15:42.790 { 00:15:42.790 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.790 "subtype": "NVMe", 00:15:42.790 "listen_addresses": [ 00:15:42.790 { 00:15:42.790 "transport": "VFIOUSER", 00:15:42.790 "trtype": "VFIOUSER", 00:15:42.790 "adrfam": "IPv4", 00:15:42.790 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.790 "trsvcid": "0" 00:15:42.790 } 00:15:42.790 ], 00:15:42.790 "allow_any_host": true, 00:15:42.790 "hosts": [], 00:15:42.790 "serial_number": "SPDK1", 00:15:42.790 "model_number": "SPDK bdev Controller", 00:15:42.790 "max_namespaces": 32, 00:15:42.790 "min_cntlid": 1, 00:15:42.790 "max_cntlid": 65519, 00:15:42.790 "namespaces": [ 00:15:42.790 { 00:15:42.790 "nsid": 1, 00:15:42.790 "bdev_name": "Malloc1", 00:15:42.790 "name": "Malloc1", 00:15:42.790 "nguid": "E44029E14D374089844FF2BEBD2A5B05", 00:15:42.790 "uuid": "e44029e1-4d37-4089-844f-f2bebd2a5b05" 00:15:42.790 } 00:15:42.790 ] 00:15:42.790 }, 00:15:42.790 { 00:15:42.790 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.790 "subtype": "NVMe", 00:15:42.790 "listen_addresses": [ 00:15:42.790 { 00:15:42.790 "transport": "VFIOUSER", 00:15:42.790 "trtype": "VFIOUSER", 00:15:42.790 "adrfam": "IPv4", 00:15:42.790 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.790 "trsvcid": "0" 00:15:42.790 } 00:15:42.790 ], 00:15:42.790 "allow_any_host": true, 00:15:42.790 "hosts": [], 00:15:42.790 "serial_number": "SPDK2", 00:15:42.790 "model_number": "SPDK bdev Controller", 00:15:42.790 "max_namespaces": 32, 00:15:42.790 "min_cntlid": 1, 00:15:42.790 "max_cntlid": 65519, 00:15:42.790 "namespaces": [ 00:15:42.790 { 00:15:42.790 "nsid": 1, 00:15:42.790 "bdev_name": "Malloc2", 00:15:42.790 "name": "Malloc2", 00:15:42.790 "nguid": "F2E59488306E40A48516FD46A83D85DE", 00:15:42.790 "uuid": "f2e59488-306e-40a4-8516-fd46a83d85de" 00:15:42.790 } 00:15:42.790 ] 00:15:42.790 } 00:15:42.790 ] 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@34 -- # aerpid=514997 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:42.790 17:07:58 -- common/autotest_common.sh@1244 -- # local i=0 00:15:42.790 17:07:58 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:42.790 17:07:58 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:42.790 17:07:58 -- common/autotest_common.sh@1255 -- # return 0 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:42.790 17:07:58 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:43.048 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.048 Malloc3 00:15:43.305 17:07:59 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:43.563 17:07:59 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.563 Asynchronous Event Request test 00:15:43.563 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.563 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:43.563 Registering asynchronous event callbacks... 00:15:43.563 Starting namespace attribute notice tests for all controllers... 00:15:43.563 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.563 aer_cb - Changed Namespace 00:15:43.563 Cleaning up... 00:15:43.822 [ 00:15:43.822 { 00:15:43.822 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.822 "subtype": "Discovery", 00:15:43.822 "listen_addresses": [], 00:15:43.822 "allow_any_host": true, 00:15:43.822 "hosts": [] 00:15:43.822 }, 00:15:43.822 { 00:15:43.822 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.822 "subtype": "NVMe", 00:15:43.822 "listen_addresses": [ 00:15:43.822 { 00:15:43.822 "transport": "VFIOUSER", 00:15:43.822 "trtype": "VFIOUSER", 00:15:43.822 "adrfam": "IPv4", 00:15:43.822 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.822 "trsvcid": "0" 00:15:43.822 } 00:15:43.822 ], 00:15:43.822 "allow_any_host": true, 00:15:43.822 "hosts": [], 00:15:43.822 "serial_number": "SPDK1", 00:15:43.822 "model_number": "SPDK bdev Controller", 00:15:43.822 "max_namespaces": 32, 00:15:43.822 "min_cntlid": 1, 00:15:43.822 "max_cntlid": 65519, 00:15:43.822 "namespaces": [ 00:15:43.822 { 00:15:43.822 "nsid": 1, 00:15:43.822 "bdev_name": "Malloc1", 00:15:43.822 "name": "Malloc1", 00:15:43.822 "nguid": "E44029E14D374089844FF2BEBD2A5B05", 00:15:43.822 "uuid": "e44029e1-4d37-4089-844f-f2bebd2a5b05" 00:15:43.822 }, 00:15:43.822 { 00:15:43.822 "nsid": 2, 00:15:43.822 "bdev_name": "Malloc3", 00:15:43.822 "name": "Malloc3", 00:15:43.822 "nguid": "059B3ED214A549FA8817A3D997AA74A0", 00:15:43.822 "uuid": "059b3ed2-14a5-49fa-8817-a3d997aa74a0" 00:15:43.822 } 00:15:43.822 ] 00:15:43.822 }, 00:15:43.822 { 00:15:43.822 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.822 "subtype": "NVMe", 00:15:43.822 "listen_addresses": [ 00:15:43.822 { 00:15:43.822 "transport": "VFIOUSER", 00:15:43.822 "trtype": "VFIOUSER", 00:15:43.822 "adrfam": "IPv4", 00:15:43.822 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.822 "trsvcid": "0" 00:15:43.822 } 00:15:43.822 ], 00:15:43.822 "allow_any_host": true, 00:15:43.822 "hosts": [], 00:15:43.822 "serial_number": "SPDK2", 00:15:43.822 "model_number": "SPDK bdev Controller", 00:15:43.822 "max_namespaces": 32, 00:15:43.822 "min_cntlid": 1, 00:15:43.822 "max_cntlid": 65519, 00:15:43.822 "namespaces": [ 00:15:43.822 { 00:15:43.822 "nsid": 1, 00:15:43.822 "bdev_name": "Malloc2", 00:15:43.822 "name": "Malloc2", 00:15:43.822 "nguid": "F2E59488306E40A48516FD46A83D85DE", 00:15:43.822 "uuid": "f2e59488-306e-40a4-8516-fd46a83d85de" 
00:15:43.822 } 00:15:43.822 ] 00:15:43.822 } 00:15:43.822 ] 00:15:43.822 17:07:59 -- target/nvmf_vfio_user.sh@44 -- # wait 514997 00:15:43.822 17:07:59 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:43.822 17:07:59 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:43.822 17:07:59 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:43.822 17:07:59 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:43.822 [2024-07-20 17:07:59.778199] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:43.822 [2024-07-20 17:07:59.778237] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid515057 ] 00:15:43.822 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.822 [2024-07-20 17:07:59.812834] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:43.822 [2024-07-20 17:07:59.815149] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.822 [2024-07-20 17:07:59.815178] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa02e638000 00:15:43.822 [2024-07-20 17:07:59.816148] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.822 [2024-07-20 17:07:59.817146] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.822 [2024-07-20 17:07:59.818161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.822 [2024-07-20 17:07:59.819166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.822 [2024-07-20 17:07:59.820174] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.822 [2024-07-20 17:07:59.821180] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.822 [2024-07-20 17:07:59.822191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.822 [2024-07-20 17:07:59.823204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.822 [2024-07-20 17:07:59.824223] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.822 [2024-07-20 17:07:59.824258] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa02d3ec000 00:15:43.822 [2024-07-20 17:07:59.825413] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.822 [2024-07-20 
17:07:59.839623] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:43.822 [2024-07-20 17:07:59.839659] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:43.822 [2024-07-20 17:07:59.844765] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.822 [2024-07-20 17:07:59.844842] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:43.822 [2024-07-20 17:07:59.844936] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:43.822 [2024-07-20 17:07:59.844962] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:43.822 [2024-07-20 17:07:59.844973] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:43.822 [2024-07-20 17:07:59.845764] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:43.822 [2024-07-20 17:07:59.845815] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:43.822 [2024-07-20 17:07:59.845830] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:43.822 [2024-07-20 17:07:59.846789] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.822 [2024-07-20 17:07:59.846820] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:43.822 [2024-07-20 17:07:59.846837] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:43.822 [2024-07-20 17:07:59.847782] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:43.822 [2024-07-20 17:07:59.847835] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:43.822 [2024-07-20 17:07:59.848782] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:43.822 [2024-07-20 17:07:59.848823] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:43.822 [2024-07-20 17:07:59.848834] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:43.822 [2024-07-20 17:07:59.848846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:43.822 [2024-07-20 17:07:59.848956] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:43.822 
[2024-07-20 17:07:59.848965] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:43.822 [2024-07-20 17:07:59.848974] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:43.822 [2024-07-20 17:07:59.849803] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:43.822 [2024-07-20 17:07:59.850800] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:43.822 [2024-07-20 17:07:59.851837] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:43.822 [2024-07-20 17:07:59.852858] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:43.822 [2024-07-20 17:07:59.854818] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:43.822 [2024-07-20 17:07:59.854837] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:43.822 [2024-07-20 17:07:59.854847] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.854872] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:43.822 [2024-07-20 17:07:59.854887] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.854908] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.822 [2024-07-20 17:07:59.854918] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.822 [2024-07-20 17:07:59.854938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.822 [2024-07-20 17:07:59.862810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:43.822 [2024-07-20 17:07:59.862835] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:43.822 [2024-07-20 17:07:59.862850] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:43.822 [2024-07-20 17:07:59.862858] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:43.822 [2024-07-20 17:07:59.862866] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:43.822 [2024-07-20 17:07:59.862876] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:43.822 [2024-07-20 17:07:59.862884] 
nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:43.822 [2024-07-20 17:07:59.862892] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.862910] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.862928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:43.822 [2024-07-20 17:07:59.870805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:43.822 [2024-07-20 17:07:59.870833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.822 [2024-07-20 17:07:59.870847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.822 [2024-07-20 17:07:59.870860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.822 [2024-07-20 17:07:59.870873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.822 [2024-07-20 17:07:59.870882] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.870897] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.870913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:43.822 [2024-07-20 17:07:59.878807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:43.822 [2024-07-20 17:07:59.878826] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:43.822 [2024-07-20 17:07:59.878835] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.878848] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.878862] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.878877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.822 [2024-07-20 17:07:59.886803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:43.822 [2024-07-20 17:07:59.886875] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
identify active ns (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.886890] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:43.822 [2024-07-20 17:07:59.886908] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:43.822 [2024-07-20 17:07:59.886918] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:43.822 [2024-07-20 17:07:59.886928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:43.822 [2024-07-20 17:07:59.894818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.894850] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:43.823 [2024-07-20 17:07:59.894870] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.894886] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.894899] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.823 [2024-07-20 17:07:59.894907] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.823 [2024-07-20 17:07:59.894918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.902803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.902833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.902851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.902865] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.823 [2024-07-20 17:07:59.902873] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.823 [2024-07-20 17:07:59.902884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.910332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.910358] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.910371] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.910385] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.910396] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.910404] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.910413] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:43.823 [2024-07-20 17:07:59.910421] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:43.823 [2024-07-20 17:07:59.910429] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:43.823 [2024-07-20 17:07:59.910457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.917804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.917830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.925802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.925829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.933803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.933829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.941805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.941832] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:43.823 [2024-07-20 17:07:59.941842] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:43.823 [2024-07-20 17:07:59.941848] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:43.823 [2024-07-20 17:07:59.941855] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:43.823 [2024-07-20 17:07:59.941865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:43.823 [2024-07-20 17:07:59.941876] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:43.823 [2024-07-20 17:07:59.941885] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:43.823 [2024-07-20 17:07:59.941894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 
0x2000002fc000 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.941905] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:43.823 [2024-07-20 17:07:59.941913] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.823 [2024-07-20 17:07:59.941922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.941934] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:43.823 [2024-07-20 17:07:59.941942] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:43.823 [2024-07-20 17:07:59.941951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.949802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.949833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.949849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.949861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:43.823 ===================================================== 00:15:43.823 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:43.823 ===================================================== 00:15:43.823 Controller Capabilities/Features 00:15:43.823 ================================ 00:15:43.823 Vendor ID: 4e58 00:15:43.823 Subsystem Vendor ID: 4e58 00:15:43.823 Serial Number: SPDK2 00:15:43.823 Model Number: SPDK bdev Controller 00:15:43.823 Firmware Version: 24.01.1 00:15:43.823 Recommended Arb Burst: 6 00:15:43.823 IEEE OUI Identifier: 8d 6b 50 00:15:43.823 Multi-path I/O 00:15:43.823 May have multiple subsystem ports: Yes 00:15:43.823 May have multiple controllers: Yes 00:15:43.823 Associated with SR-IOV VF: No 00:15:43.823 Max Data Transfer Size: 131072 00:15:43.823 Max Number of Namespaces: 32 00:15:43.823 Max Number of I/O Queues: 127 00:15:43.823 NVMe Specification Version (VS): 1.3 00:15:43.823 NVMe Specification Version (Identify): 1.3 00:15:43.823 Maximum Queue Entries: 256 00:15:43.823 Contiguous Queues Required: Yes 00:15:43.823 Arbitration Mechanisms Supported 00:15:43.823 Weighted Round Robin: Not Supported 00:15:43.823 Vendor Specific: Not Supported 00:15:43.823 Reset Timeout: 15000 ms 00:15:43.823 Doorbell Stride: 4 bytes 00:15:43.823 NVM Subsystem Reset: Not Supported 00:15:43.823 Command Sets Supported 00:15:43.823 NVM Command Set: Supported 00:15:43.823 Boot Partition: Not Supported 00:15:43.823 Memory Page Size Minimum: 4096 bytes 00:15:43.823 Memory Page Size Maximum: 4096 bytes 00:15:43.823 Persistent Memory Region: Not Supported 00:15:43.823 Optional Asynchronous Events Supported 00:15:43.823 Namespace Attribute Notices: Supported 00:15:43.823 Firmware Activation Notices: Not Supported 00:15:43.823 ANA Change Notices: Not Supported 00:15:43.823 PLE Aggregate Log Change Notices: Not Supported 00:15:43.823 LBA Status Info Alert 
Notices: Not Supported 00:15:43.823 EGE Aggregate Log Change Notices: Not Supported 00:15:43.823 Normal NVM Subsystem Shutdown event: Not Supported 00:15:43.823 Zone Descriptor Change Notices: Not Supported 00:15:43.823 Discovery Log Change Notices: Not Supported 00:15:43.823 Controller Attributes 00:15:43.823 128-bit Host Identifier: Supported 00:15:43.823 Non-Operational Permissive Mode: Not Supported 00:15:43.823 NVM Sets: Not Supported 00:15:43.823 Read Recovery Levels: Not Supported 00:15:43.823 Endurance Groups: Not Supported 00:15:43.823 Predictable Latency Mode: Not Supported 00:15:43.823 Traffic Based Keep Alive: Not Supported 00:15:43.823 Namespace Granularity: Not Supported 00:15:43.823 SQ Associations: Not Supported 00:15:43.823 UUID List: Not Supported 00:15:43.823 Multi-Domain Subsystem: Not Supported 00:15:43.823 Fixed Capacity Management: Not Supported 00:15:43.823 Variable Capacity Management: Not Supported 00:15:43.823 Delete Endurance Group: Not Supported 00:15:43.823 Delete NVM Set: Not Supported 00:15:43.823 Extended LBA Formats Supported: Not Supported 00:15:43.823 Flexible Data Placement Supported: Not Supported 00:15:43.823 00:15:43.823 Controller Memory Buffer Support 00:15:43.823 ================================ 00:15:43.823 Supported: No 00:15:43.823 00:15:43.823 Persistent Memory Region Support 00:15:43.823 ================================ 00:15:43.823 Supported: No 00:15:43.823 00:15:43.823 Admin Command Set Attributes 00:15:43.823 ============================ 00:15:43.823 Security Send/Receive: Not Supported 00:15:43.823 Format NVM: Not Supported 00:15:43.823 Firmware Activate/Download: Not Supported 00:15:43.823 Namespace Management: Not Supported 00:15:43.823 Device Self-Test: Not Supported 00:15:43.823 Directives: Not Supported 00:15:43.823 NVMe-MI: Not Supported 00:15:43.823 Virtualization Management: Not Supported 00:15:43.823 Doorbell Buffer Config: Not Supported 00:15:43.823 Get LBA Status Capability: Not Supported 00:15:43.823 Command & Feature Lockdown Capability: Not Supported 00:15:43.823 Abort Command Limit: 4 00:15:43.823 Async Event Request Limit: 4 00:15:43.823 Number of Firmware Slots: N/A 00:15:43.823 Firmware Slot 1 Read-Only: N/A 00:15:43.823 Firmware Activation Without Reset: N/A 00:15:43.823 Multiple Update Detection Support: N/A 00:15:43.823 Firmware Update Granularity: No Information Provided 00:15:43.823 Per-Namespace SMART Log: No 00:15:43.823 Asymmetric Namespace Access Log Page: Not Supported 00:15:43.823 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:43.823 Command Effects Log Page: Supported 00:15:43.823 Get Log Page Extended Data: Supported 00:15:43.823 Telemetry Log Pages: Not Supported 00:15:43.823 Persistent Event Log Pages: Not Supported 00:15:43.823 Supported Log Pages Log Page: May Support 00:15:43.823 Commands Supported & Effects Log Page: Not Supported 00:15:43.823 Feature Identifiers & Effects Log Page: May Support 00:15:43.823 NVMe-MI Commands & Effects Log Page: May Support 00:15:43.823 Data Area 4 for Telemetry Log: Not Supported 00:15:43.823 Error Log Page Entries Supported: 128 00:15:43.823 Keep Alive: Supported 00:15:43.823 Keep Alive Granularity: 10000 ms 00:15:43.823 00:15:43.823 NVM Command Set Attributes 00:15:43.823 ========================== 00:15:43.823 Submission Queue Entry Size 00:15:43.823 Max: 64 00:15:43.823 Min: 64 00:15:43.823 Completion Queue Entry Size 00:15:43.823 Max: 16 00:15:43.823 Min: 16 00:15:43.823 Number of Namespaces: 32 00:15:43.823 Compare Command: Supported 00:15:43.823 Write
Uncorrectable Command: Not Supported 00:15:43.823 Dataset Management Command: Supported 00:15:43.823 Write Zeroes Command: Supported 00:15:43.823 Set Features Save Field: Not Supported 00:15:43.823 Reservations: Not Supported 00:15:43.823 Timestamp: Not Supported 00:15:43.823 Copy: Supported 00:15:43.823 Volatile Write Cache: Present 00:15:43.823 Atomic Write Unit (Normal): 1 00:15:43.823 Atomic Write Unit (PFail): 1 00:15:43.823 Atomic Compare & Write Unit: 1 00:15:43.823 Fused Compare & Write: Supported 00:15:43.823 Scatter-Gather List 00:15:43.823 SGL Command Set: Supported (Dword aligned) 00:15:43.823 SGL Keyed: Not Supported 00:15:43.823 SGL Bit Bucket Descriptor: Not Supported 00:15:43.823 SGL Metadata Pointer: Not Supported 00:15:43.823 Oversized SGL: Not Supported 00:15:43.823 SGL Metadata Address: Not Supported 00:15:43.823 SGL Offset: Not Supported 00:15:43.823 Transport SGL Data Block: Not Supported 00:15:43.823 Replay Protected Memory Block: Not Supported 00:15:43.823 00:15:43.823 Firmware Slot Information 00:15:43.823 ========================= 00:15:43.823 Active slot: 1 00:15:43.823 Slot 1 Firmware Revision: 24.01.1 00:15:43.823 00:15:43.823 00:15:43.823 Commands Supported and Effects 00:15:43.823 ============================== 00:15:43.823 Admin Commands 00:15:43.823 -------------- 00:15:43.823 Get Log Page (02h): Supported 00:15:43.823 Identify (06h): Supported 00:15:43.823 Abort (08h): Supported 00:15:43.823 Set Features (09h): Supported 00:15:43.823 Get Features (0Ah): Supported 00:15:43.823 Asynchronous Event Request (0Ch): Supported 00:15:43.823 Keep Alive (18h): Supported 00:15:43.823 I/O Commands 00:15:43.823 ------------ 00:15:43.823 Flush (00h): Supported LBA-Change 00:15:43.823 Write (01h): Supported LBA-Change 00:15:43.823 Read (02h): Supported 00:15:43.823 Compare (05h): Supported 00:15:43.823 Write Zeroes (08h): Supported LBA-Change 00:15:43.823 Dataset Management (09h): Supported LBA-Change 00:15:43.823 Copy (19h): Supported LBA-Change 00:15:43.823 Unknown (79h): Supported LBA-Change 00:15:43.823 Unknown (7Ah): Supported 00:15:43.823 00:15:43.823 Error Log 00:15:43.823 ========= 00:15:43.823 00:15:43.823 Arbitration 00:15:43.823 =========== 00:15:43.823 Arbitration Burst: 1 00:15:43.823 00:15:43.823 Power Management 00:15:43.823 ================ 00:15:43.823 Number of Power States: 1 00:15:43.823 Current Power State: Power State #0 00:15:43.823 Power State #0: 00:15:43.823 Max Power: 0.00 W 00:15:43.823 Non-Operational State: Operational 00:15:43.823 Entry Latency: Not Reported 00:15:43.823 Exit Latency: Not Reported 00:15:43.823 Relative Read Throughput: 0 00:15:43.823 Relative Read Latency: 0 00:15:43.823 Relative Write Throughput: 0 00:15:43.823 Relative Write Latency: 0 00:15:43.823 Idle Power: Not Reported 00:15:43.823 Active Power: Not Reported 00:15:43.823 Non-Operational Permissive Mode: Not Supported 00:15:43.823 00:15:43.823 Health Information 00:15:43.823 ================== 00:15:43.823 Critical Warnings: 00:15:43.823 Available Spare Space: OK 00:15:43.823 Temperature: OK 00:15:43.823 Device Reliability: OK 00:15:43.823 Read Only: No 00:15:43.823 Volatile Memory Backup: OK 00:15:43.823 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:44.081 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:44.081 Available Spare: 0% 00:15:44.081 Available Spare Threshold: 0% 00:15:44.081 Life Percentage Used: 0% 00:15:44.081 Data Units Read: 0 00:15:44.081 Data Units Written: 0 00:15:44.081 Host Read Commands: 0 00:15:44.081 Host Write Commands: 0 00:15:44.081 Controller Busy Time: 0 minutes 00:15:44.081 Power Cycles: 0 00:15:44.081 Power On Hours: 0 hours 00:15:44.081 Unsafe Shutdowns: 0 00:15:44.081 Unrecoverable Media Errors: 0 00:15:44.081 Lifetime Error Log Entries: 0 00:15:44.081 Warning Temperature Time: 0 minutes 00:15:44.081 Critical Temperature Time: 0 minutes 00:15:44.081 00:15:44.081 Number of Queues 00:15:44.081 ================ 00:15:44.081 Number of I/O Submission Queues: 127 00:15:44.081 Number of I/O Completion Queues: 127 00:15:44.081 00:15:44.081 Active Namespaces 00:15:44.081 ================= 00:15:44.081 Namespace ID:1 00:15:44.081 Error Recovery Timeout: Unlimited 00:15:44.081 Command Set Identifier: NVM (00h) 00:15:44.081 Deallocate: Supported 00:15:44.081 Deallocated/Unwritten Error: Not Supported 00:15:44.081 Deallocated Read Value: Unknown 00:15:44.081 Deallocate in Write Zeroes: Not Supported 00:15:44.081 Deallocated Guard Field: 0xFFFF 00:15:44.081 Flush: Supported 00:15:44.081 Reservation: Supported 00:15:44.081 Namespace Sharing Capabilities: Multiple Controllers 00:15:44.081 Size (in LBAs): 131072 (0GiB) 00:15:44.081 Capacity (in LBAs): 131072 (0GiB) 00:15:44.081 Utilization (in LBAs): 131072 (0GiB) 00:15:44.081 NGUID: F2E59488306E40A48516FD46A83D85DE 00:15:44.081 UUID: f2e59488-306e-40a4-8516-fd46a83d85de 00:15:44.081 Thin Provisioning: Not Supported 00:15:44.081 Per-NS Atomic Units: Yes 00:15:44.081 Atomic Boundary Size (Normal): 0 00:15:44.081 Atomic Boundary Size (PFail): 0 00:15:44.081 Atomic Boundary Offset: 0 00:15:44.081 Maximum Single Source Range Length: 65535 00:15:44.081 Maximum Copy Length: 65535 00:15:44.081 Maximum Source Range Count: 1 00:15:44.081 NGUID/EUI64 Never Reused: No 00:15:44.081 Namespace Write Protected: No 00:15:44.081 Number of LBA Formats: 1 00:15:44.081 Current LBA Format: LBA Format #00 00:15:44.081 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:44.081
[2024-07-20 17:07:59.949980] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:43.823 [2024-07-20 17:07:59.957821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.957869] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:43.823 [2024-07-20 17:07:59.957887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.957899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.957909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.957918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.823 [2024-07-20 17:07:59.957981] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:43.823 [2024-07-20 17:07:59.958002] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:43.823 [2024-07-20 17:07:59.959022] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:43.823 [2024-07-20 17:07:59.959039] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:43.823 [2024-07-20 17:07:59.959997] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:43.823 [2024-07-20 17:07:59.960021] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:43.823 [2024-07-20 17:07:59.960072] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:43.824 [2024-07-20 17:07:59.962822] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:44.081 17:08:00 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:44.081 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.336 Initializing NVMe Controllers 00:15:49.336 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:49.336 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:49.336 Initialization complete. Launching workers. 00:15:49.336 ======================================================== 00:15:49.336 Latency(us) 00:15:49.336 Device Information : IOPS MiB/s Average min max 00:15:49.336 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37875.78 147.95 3379.03 1152.96 8484.94 00:15:49.336 ======================================================== 00:15:49.336 Total : 37875.78 147.95 3379.03 1152.96 8484.94 00:15:49.336 00:15:49.336 17:08:05 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:54.600 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.600 Initializing NVMe Controllers 00:15:54.600 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:54.600 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:54.600 Initialization complete. Launching workers.
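Each spdk_nvme_perf table can be cross-checked against its own numbers, since every I/O in these runs is 4096 bytes (-o 4096); -q 128 sets the queue depth, -w the workload, -t the run time in seconds, and -c 0x2 pins the worker to core 1. Checking the read run above with plain shell arithmetic (the bc pipeline is just illustrative):

    # 37875.78 IOPS x 4096 B per I/O, converted to MiB/s
    echo 'scale=2; 37875.78 * 4096 / 1024 / 1024' | bc   # prints 147.95

which matches the 147.95 MiB/s reported in the table. The corresponding table for the write run just launched follows below.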
00:15:54.600 ======================================================== 00:15:54.600 Latency(us) 00:15:54.600 Device Information : IOPS MiB/s Average min max 00:15:54.600 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36206.71 141.43 3535.68 1150.34 9521.94 00:15:54.600 ======================================================== 00:15:54.600 Total : 36206.71 141.43 3535.68 1150.34 9521.94 00:15:54.600 00:15:54.600 17:08:10 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:54.600 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.856 Initializing NVMe Controllers 00:15:59.856 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:59.856 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:59.856 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:59.856 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:59.856 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:59.856 Initialization complete. Launching workers. 00:15:59.856 Starting thread on core 2 00:15:59.856 Starting thread on core 3 00:15:59.856 Starting thread on core 1 00:15:59.857 17:08:15 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:59.857 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.037 Initializing NVMe Controllers 00:16:04.037 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.037 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.037 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:04.037 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:04.037 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:04.037 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:04.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:04.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:04.037 Initialization complete. Launching workers. 
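In the arbitration report, the secs/100000 ios column is simply 100000 (the -n count from the configuration line above) divided by the IO/s column. Taking core 0 from the per-core results that follow below (again, plain bc arithmetic for illustration):

    # 100000 I/Os at 1586.67 IO/s
    echo 'scale=4; 100000 / 1586.67' | bc   # prints 63.0250; the report rounds to 63.03

The four threads run urgent-priority queues on cores 0-3, so roughly equal IO/s across the cores indicates the queue arbitration under test is treating them evenly.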
00:16:04.037 Starting thread on core 1 with urgent priority queue 00:16:04.037 Starting thread on core 2 with urgent priority queue 00:16:04.037 Starting thread on core 3 with urgent priority queue 00:16:04.037 Starting thread on core 0 with urgent priority queue 00:16:04.037 SPDK bdev Controller (SPDK2 ) core 0: 1586.67 IO/s 63.03 secs/100000 ios 00:16:04.037 SPDK bdev Controller (SPDK2 ) core 1: 1683.33 IO/s 59.41 secs/100000 ios 00:16:04.037 SPDK bdev Controller (SPDK2 ) core 2: 1656.67 IO/s 60.36 secs/100000 ios 00:16:04.037 SPDK bdev Controller (SPDK2 ) core 3: 1801.67 IO/s 55.50 secs/100000 ios 00:16:04.037 ======================================================== 00:16:04.037 00:16:04.037 17:08:19 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:04.037 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.037 Initializing NVMe Controllers 00:16:04.037 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.037 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.037 Namespace ID: 1 size: 0GB 00:16:04.037 Initialization complete. 00:16:04.037 INFO: using host memory buffer for IO 00:16:04.037 Hello world! 00:16:04.037 17:08:19 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:04.037 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.002 Initializing NVMe Controllers 00:16:05.003 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.003 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.003 Initialization complete. Launching workers. 
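In the arbitration table above, IO/s and secs/100000 ios are reciprocal views of the same rate, so each row can be cross-checked; a quick awk pass over the per-core IO/s values from the table:

  # secs/100000 ios should equal 100000 / (IO/s); inputs copied from the table.
  printf '%s\n' 1586.67 1683.33 1656.67 1801.67 |
    awk '{ printf "%s IO/s -> %.2f secs/100000 ios\n", $1, 100000 / $1 }'
  # prints 63.03, 59.41, 60.36 and 55.50, matching the core 0-3 rows above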
00:16:05.003 submit (in ns) avg, min, max = 7338.4, 3417.8, 4016336.7 00:16:05.003 complete (in ns) avg, min, max = 24725.5, 2028.9, 4128440.0 00:16:05.003 00:16:05.003 Submit histogram 00:16:05.003 ================ 00:16:05.003 Range in us Cumulative Count 00:16:05.003 3.413 - 3.437: 0.4314% ( 59) 00:16:05.003 3.437 - 3.461: 1.3162% ( 121) 00:16:05.003 3.461 - 3.484: 3.5975% ( 312) 00:16:05.003 3.484 - 3.508: 9.0597% ( 747) 00:16:05.003 3.508 - 3.532: 16.6277% ( 1035) 00:16:05.003 3.532 - 3.556: 26.0529% ( 1289) 00:16:05.003 3.556 - 3.579: 34.7031% ( 1183) 00:16:05.003 3.579 - 3.603: 42.2931% ( 1038) 00:16:05.003 3.603 - 3.627: 49.1152% ( 933) 00:16:05.003 3.627 - 3.650: 55.1331% ( 823) 00:16:05.003 3.650 - 3.674: 59.0743% ( 539) 00:16:05.003 3.674 - 3.698: 62.2770% ( 438) 00:16:05.003 3.698 - 3.721: 65.6259% ( 458) 00:16:05.003 3.721 - 3.745: 69.3916% ( 515) 00:16:05.003 3.745 - 3.769: 73.2743% ( 531) 00:16:05.003 3.769 - 3.793: 77.5592% ( 586) 00:16:05.003 3.793 - 3.816: 81.2006% ( 498) 00:16:05.003 3.816 - 3.840: 84.1401% ( 402) 00:16:05.003 3.840 - 3.864: 86.5311% ( 327) 00:16:05.003 3.864 - 3.887: 88.2641% ( 237) 00:16:05.003 3.887 - 3.911: 89.6900% ( 195) 00:16:05.003 3.911 - 3.935: 91.0354% ( 184) 00:16:05.003 3.935 - 3.959: 92.1030% ( 146) 00:16:05.003 3.959 - 3.982: 92.8707% ( 105) 00:16:05.003 3.982 - 4.006: 93.9383% ( 146) 00:16:05.003 4.006 - 4.030: 94.7719% ( 114) 00:16:05.003 4.030 - 4.053: 95.3641% ( 81) 00:16:05.003 4.053 - 4.077: 95.8467% ( 66) 00:16:05.003 4.077 - 4.101: 96.1758% ( 45) 00:16:05.003 4.101 - 4.124: 96.4390% ( 36) 00:16:05.003 4.124 - 4.148: 96.6876% ( 34) 00:16:05.003 4.148 - 4.172: 96.8046% ( 16) 00:16:05.003 4.172 - 4.196: 96.8777% ( 10) 00:16:05.003 4.196 - 4.219: 96.9874% ( 15) 00:16:05.003 4.219 - 4.243: 97.1264% ( 19) 00:16:05.003 4.243 - 4.267: 97.2287% ( 14) 00:16:05.003 4.267 - 4.290: 97.2872% ( 8) 00:16:05.003 4.290 - 4.314: 97.3823% ( 13) 00:16:05.003 4.314 - 4.338: 97.4188% ( 5) 00:16:05.003 4.338 - 4.361: 97.4920% ( 10) 00:16:05.003 4.361 - 4.385: 97.5431% ( 7) 00:16:05.003 4.385 - 4.409: 97.5505% ( 1) 00:16:05.003 4.409 - 4.433: 97.5724% ( 3) 00:16:05.003 4.480 - 4.504: 97.5870% ( 2) 00:16:05.003 4.504 - 4.527: 97.6089% ( 3) 00:16:05.003 4.575 - 4.599: 97.6163% ( 1) 00:16:05.003 4.599 - 4.622: 97.6382% ( 3) 00:16:05.003 4.646 - 4.670: 97.6455% ( 1) 00:16:05.003 4.670 - 4.693: 97.6601% ( 2) 00:16:05.003 4.693 - 4.717: 97.6748% ( 2) 00:16:05.003 4.717 - 4.741: 97.7113% ( 5) 00:16:05.003 4.741 - 4.764: 97.7552% ( 6) 00:16:05.003 4.764 - 4.788: 97.7991% ( 6) 00:16:05.003 4.788 - 4.812: 97.8064% ( 1) 00:16:05.003 4.812 - 4.836: 97.8795% ( 10) 00:16:05.003 4.836 - 4.859: 97.9380% ( 8) 00:16:05.003 4.859 - 4.883: 97.9672% ( 4) 00:16:05.003 4.883 - 4.907: 98.0477% ( 11) 00:16:05.003 4.907 - 4.930: 98.0550% ( 1) 00:16:05.003 4.930 - 4.954: 98.1135% ( 8) 00:16:05.003 4.954 - 4.978: 98.1354% ( 3) 00:16:05.003 4.978 - 5.001: 98.1720% ( 5) 00:16:05.003 5.001 - 5.025: 98.1866% ( 2) 00:16:05.003 5.025 - 5.049: 98.2159% ( 4) 00:16:05.003 5.049 - 5.073: 98.2378% ( 3) 00:16:05.003 5.073 - 5.096: 98.2817% ( 6) 00:16:05.003 5.096 - 5.120: 98.3109% ( 4) 00:16:05.003 5.120 - 5.144: 98.3255% ( 2) 00:16:05.003 5.144 - 5.167: 98.3548% ( 4) 00:16:05.003 5.167 - 5.191: 98.3767% ( 3) 00:16:05.003 5.191 - 5.215: 98.3987% ( 3) 00:16:05.003 5.215 - 5.239: 98.4060% ( 1) 00:16:05.003 5.239 - 5.262: 98.4206% ( 2) 00:16:05.003 5.262 - 5.286: 98.4352% ( 2) 00:16:05.003 5.286 - 5.310: 98.4425% ( 1) 00:16:05.003 5.310 - 5.333: 98.4498% ( 1) 00:16:05.003 5.357 - 5.381: 98.4645% ( 
2) 00:16:05.003 5.428 - 5.452: 98.4718% ( 1) 00:16:05.003 5.452 - 5.476: 98.4791% ( 1) 00:16:05.003 5.547 - 5.570: 98.4937% ( 2) 00:16:05.003 5.618 - 5.641: 98.5010% ( 1) 00:16:05.003 5.665 - 5.689: 98.5083% ( 1) 00:16:05.003 5.807 - 5.831: 98.5156% ( 1) 00:16:05.003 5.855 - 5.879: 98.5230% ( 1) 00:16:05.003 6.068 - 6.116: 98.5303% ( 1) 00:16:05.003 6.210 - 6.258: 98.5376% ( 1) 00:16:05.003 6.732 - 6.779: 98.5449% ( 1) 00:16:05.003 6.827 - 6.874: 98.5668% ( 3) 00:16:05.003 6.921 - 6.969: 98.5741% ( 1) 00:16:05.003 7.206 - 7.253: 98.5815% ( 1) 00:16:05.003 7.301 - 7.348: 98.5961% ( 2) 00:16:05.003 7.396 - 7.443: 98.6180% ( 3) 00:16:05.003 7.443 - 7.490: 98.6253% ( 1) 00:16:05.003 7.538 - 7.585: 98.6326% ( 1) 00:16:05.003 7.585 - 7.633: 98.6473% ( 2) 00:16:05.003 7.633 - 7.680: 98.6619% ( 2) 00:16:05.003 7.680 - 7.727: 98.6765% ( 2) 00:16:05.003 7.775 - 7.822: 98.6838% ( 1) 00:16:05.003 7.822 - 7.870: 98.7058% ( 3) 00:16:05.003 7.870 - 7.917: 98.7277% ( 3) 00:16:05.003 7.917 - 7.964: 98.7350% ( 1) 00:16:05.003 8.012 - 8.059: 98.7423% ( 1) 00:16:05.003 8.059 - 8.107: 98.7496% ( 1) 00:16:05.003 8.154 - 8.201: 98.7643% ( 2) 00:16:05.003 8.296 - 8.344: 98.7862% ( 3) 00:16:05.003 8.344 - 8.391: 98.7935% ( 1) 00:16:05.003 8.439 - 8.486: 98.8008% ( 1) 00:16:05.003 8.581 - 8.628: 98.8081% ( 1) 00:16:05.003 8.960 - 9.007: 98.8154% ( 1) 00:16:05.003 9.007 - 9.055: 98.8228% ( 1) 00:16:05.003 9.102 - 9.150: 98.8301% ( 1) 00:16:05.003 9.150 - 9.197: 98.8374% ( 1) 00:16:05.003 9.197 - 9.244: 98.8447% ( 1) 00:16:05.003 9.387 - 9.434: 98.8520% ( 1) 00:16:05.003 9.529 - 9.576: 98.8593% ( 1) 00:16:05.003 9.624 - 9.671: 98.8739% ( 2) 00:16:05.003 9.671 - 9.719: 98.8886% ( 2) 00:16:05.003 9.813 - 9.861: 98.8959% ( 1) 00:16:05.003 10.003 - 10.050: 98.9105% ( 2) 00:16:05.003 10.240 - 10.287: 98.9178% ( 1) 00:16:05.003 10.335 - 10.382: 98.9251% ( 1) 00:16:05.003 10.667 - 10.714: 98.9324% ( 1) 00:16:05.003 11.093 - 11.141: 98.9397% ( 1) 00:16:05.003 11.141 - 11.188: 98.9471% ( 1) 00:16:05.003 11.283 - 11.330: 98.9617% ( 2) 00:16:05.003 11.330 - 11.378: 98.9690% ( 1) 00:16:05.003 11.615 - 11.662: 98.9763% ( 1) 00:16:05.003 11.804 - 11.852: 98.9836% ( 1) 00:16:05.003 12.089 - 12.136: 98.9909% ( 1) 00:16:05.003 12.136 - 12.231: 98.9982% ( 1) 00:16:05.003 13.653 - 13.748: 99.0129% ( 2) 00:16:05.003 13.843 - 13.938: 99.0202% ( 1) 00:16:05.003 14.222 - 14.317: 99.0421% ( 3) 00:16:05.003 14.507 - 14.601: 99.0494% ( 1) 00:16:05.003 14.886 - 14.981: 99.0567% ( 1) 00:16:05.003 16.877 - 16.972: 99.0787% ( 3) 00:16:05.003 16.972 - 17.067: 99.0860% ( 1) 00:16:05.003 17.256 - 17.351: 99.0933% ( 1) 00:16:05.003 17.351 - 17.446: 99.1299% ( 5) 00:16:05.003 17.446 - 17.541: 99.1664% ( 5) 00:16:05.003 17.541 - 17.636: 99.2103% ( 6) 00:16:05.003 17.636 - 17.730: 99.2469% ( 5) 00:16:05.003 17.730 - 17.825: 99.2615% ( 2) 00:16:05.003 17.825 - 17.920: 99.2980% ( 5) 00:16:05.003 17.920 - 18.015: 99.3273% ( 4) 00:16:05.003 18.015 - 18.110: 99.3712% ( 6) 00:16:05.003 18.110 - 18.204: 99.4370% ( 9) 00:16:05.003 18.204 - 18.299: 99.5320% ( 13) 00:16:05.003 18.299 - 18.394: 99.5759% ( 6) 00:16:05.003 18.394 - 18.489: 99.6051% ( 4) 00:16:05.003 18.489 - 18.584: 99.6636% ( 8) 00:16:05.003 18.584 - 18.679: 99.6783% ( 2) 00:16:05.003 18.679 - 18.773: 99.7002% ( 3) 00:16:05.003 18.773 - 18.868: 99.7295% ( 4) 00:16:05.003 18.868 - 18.963: 99.7368% ( 1) 00:16:05.003 18.963 - 19.058: 99.7514% ( 2) 00:16:05.003 19.058 - 19.153: 99.7587% ( 1) 00:16:05.003 19.153 - 19.247: 99.7733% ( 2) 00:16:05.003 19.247 - 19.342: 99.7806% ( 1) 00:16:05.003 19.342 - 
19.437: 99.7879% ( 1) 00:16:05.003 19.437 - 19.532: 99.8026% ( 2) 00:16:05.003 19.532 - 19.627: 99.8099% ( 1) 00:16:05.003 19.627 - 19.721: 99.8318% ( 3) 00:16:05.003 19.721 - 19.816: 99.8391% ( 1) 00:16:05.003 19.911 - 20.006: 99.8464% ( 1) 00:16:05.003 20.290 - 20.385: 99.8538% ( 1) 00:16:05.003 20.575 - 20.670: 99.8611% ( 1) 00:16:05.003 21.239 - 21.333: 99.8684% ( 1) 00:16:05.003 21.333 - 21.428: 99.8757% ( 1) 00:16:05.003 21.428 - 21.523: 99.8830% ( 1) 00:16:05.003 21.618 - 21.713: 99.8903% ( 1) 00:16:05.003 22.376 - 22.471: 99.8976% ( 1) 00:16:05.003 23.230 - 23.324: 99.9049% ( 1) 00:16:05.003 28.255 - 28.444: 99.9123% ( 1) 00:16:05.003 3859.342 - 3883.615: 99.9196% ( 1) 00:16:05.003 3980.705 - 4004.978: 99.9927% ( 10) 00:16:05.003 4004.978 - 4029.250: 100.0000% ( 1) 00:16:05.003 00:16:05.003 Complete histogram 00:16:05.003 ================== 00:16:05.003 Range in us Cumulative Count 00:16:05.003 2.027 - 2.039: 1.8573% ( 254) 00:16:05.004 2.039 - 2.050: 16.1816% ( 1959) 00:16:05.004 2.050 - 2.062: 19.3989% ( 440) 00:16:05.004 2.062 - 2.074: 34.1913% ( 2023) 00:16:05.004 2.074 - 2.086: 52.8225% ( 2548) 00:16:05.004 2.086 - 2.098: 56.9903% ( 570) 00:16:05.004 2.098 - 2.110: 60.6171% ( 496) 00:16:05.004 2.110 - 2.121: 64.5584% ( 539) 00:16:05.004 2.121 - 2.133: 65.9184% ( 186) 00:16:05.004 2.133 - 2.145: 73.8301% ( 1082) 00:16:05.004 2.145 - 2.157: 79.7455% ( 809) 00:16:05.004 2.157 - 2.169: 81.4419% ( 232) 00:16:05.004 2.169 - 2.181: 83.8696% ( 332) 00:16:05.004 2.181 - 2.193: 86.0339% ( 296) 00:16:05.004 2.193 - 2.204: 87.2331% ( 164) 00:16:05.004 2.204 - 2.216: 91.0500% ( 522) 00:16:05.004 2.216 - 2.228: 93.7043% ( 363) 00:16:05.004 2.228 - 2.240: 94.4940% ( 108) 00:16:05.004 2.240 - 2.252: 95.0351% ( 74) 00:16:05.004 2.252 - 2.264: 95.2910% ( 35) 00:16:05.004 2.264 - 2.276: 95.4592% ( 23) 00:16:05.004 2.276 - 2.287: 95.8029% ( 47) 00:16:05.004 2.287 - 2.299: 95.8467% ( 6) 00:16:05.004 2.299 - 2.311: 95.9125% ( 9) 00:16:05.004 2.311 - 2.323: 96.0734% ( 22) 00:16:05.004 2.323 - 2.335: 96.4390% ( 50) 00:16:05.004 2.335 - 2.347: 96.6511% ( 29) 00:16:05.004 2.347 - 2.359: 97.1044% ( 62) 00:16:05.004 2.359 - 2.370: 97.4554% ( 48) 00:16:05.004 2.370 - 2.382: 97.6163% ( 22) 00:16:05.004 2.382 - 2.394: 97.7625% ( 20) 00:16:05.004 2.394 - 2.406: 97.9161% ( 21) 00:16:05.004 2.406 - 2.418: 98.0038% ( 12) 00:16:05.004 2.418 - 2.430: 98.1500% ( 20) 00:16:05.004 2.430 - 2.441: 98.2670% ( 16) 00:16:05.004 2.441 - 2.453: 98.3036% ( 5) 00:16:05.004 2.453 - 2.465: 98.3109% ( 1) 00:16:05.004 2.465 - 2.477: 98.3475% ( 5) 00:16:05.004 2.477 - 2.489: 98.3621% ( 2) 00:16:05.004 2.489 - 2.501: 98.3694% ( 1) 00:16:05.004 2.501 - 2.513: 98.3840% ( 2) 00:16:05.004 2.513 - 2.524: 98.4133% ( 4) 00:16:05.004 2.536 - 2.548: 98.4206% ( 1) 00:16:05.004 2.548 - 2.560: 98.4279% ( 1) 00:16:05.004 2.572 - 2.584: 98.4425% ( 2) 00:16:05.004 2.584 - 2.596: 98.4498% ( 1) 00:16:05.004 2.607 - 2.619: 98.4645% ( 2) 00:16:05.004 2.619 - 2.631: 98.4718% ( 1) 00:16:05.004 2.643 - 2.655: 98.4791% ( 1) 00:16:05.004 2.690 - 2.702: 98.4864% ( 1) 00:16:05.004 2.714 - 2.726: 98.4937% ( 1) 00:16:05.004 2.750 - 2.761: 98.5010% ( 1) 00:16:05.004 2.797 - 2.809: 98.5083% ( 1) 00:16:05.004 2.821 - 2.833: 98.5156% ( 1) 00:16:05.004 2.856 - 2.868: 98.5303% ( 2) 00:16:05.004 2.904 - 2.916: 98.5376% ( 1) 00:16:05.004 2.916 - 2.927: 98.5449% ( 1) 00:16:05.004 2.939 - 2.951: 98.5522% ( 1) 00:16:05.004 3.058 - 3.081: 98.5668% ( 2) 00:16:05.004 3.200 - 3.224: 98.5888% ( 3) 00:16:05.004 3.224 - 3.247: 98.5961% ( 1) 00:16:05.004 3.247 - 3.271: 
98.6034% ( 1) 00:16:05.004 3.271 - 3.295: 98.6107% ( 1) 00:16:05.004 3.295 - 3.319: 98.6180% ( 1) 00:16:05.004 3.342 - 3.366: 98.6326% ( 2) 00:16:05.004 3.366 - 3.390: 98.6400% ( 1) 00:16:05.004 3.390 - 3.413: 98.6473% ( 1) 00:16:05.004 3.413 - 3.437: 98.6619% ( 2) 00:16:05.004 3.437 - 3.461: 98.6692% ( 1) 00:16:05.004 3.579 - 3.603: 98.6765% ( 1) 00:16:05.004 3.603 - 3.627: 98.6911% ( 2) 00:16:05.004 3.627 - 3.650: 98.7058% ( 2) 00:16:05.004 3.650 - 3.674: 98.7131% ( 1) 00:16:05.004 3.698 - 3.721: 98.7204% ( 1) 00:16:05.004 3.721 - 3.745: 98.7277% ( 1) 00:16:05.004 3.745 - 3.769: 98.7350% ( 1) 00:16:05.004 3.769 - 3.793: 98.7423% ( 1) 00:16:05.004 3.793 - 3.816: 98.7569% ( 2) 00:16:05.004 3.911 - 3.935: 98.7643% ( 1) 00:16:05.004 3.935 - 3.959: 98.7716% ( 1) 00:16:05.004 4.030 - 4.053: 98.7789% ( 1) 00:16:05.004 4.053 - 4.077: 98.7862% ( 1) 00:16:05.004 4.314 - 4.338: 98.7935% ( 1) 00:16:05.004 5.215 - 5.239: 98.8008% ( 1) 00:16:05.004 5.310 - 5.333: 98.8081% ( 1) 00:16:05.004 5.381 - 5.404: 98.8154% ( 1) 00:16:05.004 5.570 - 5.594: 98.8228% ( 1) 00:16:05.004 5.594 - 5.618: 98.8301% ( 1) 00:16:05.004 5.784 - 5.807: 98.8374% ( 1) 00:16:05.004 5.807 - 5.831: 98.8520% ( 2) 00:16:05.004 5.831 - 5.855: 98.8593% ( 1) 00:16:05.004 6.116 - 6.163: 98.8666% ( 1) 00:16:05.004 6.210 - 6.258: 98.8739% ( 1) 00:16:05.004 6.400 - 6.447: 98.8813% ( 1) 00:16:05.004 6.447 - 6.495: 98.8886% ( 1) 00:16:05.004 6.684 - 6.732: 98.8959% ( 1) 00:16:05.004 7.016 - 7.064: 98.9032% ( 1) 00:16:05.004 12.136 - 12.231: 98.9105% ( 1) 00:16:05.004 13.274 - 13.369: 98.9178% ( 1) 00:16:05.004 14.033 - 14.127: 98.9251% ( 1) 00:16:05.004 15.550 - 15.644: 98.9324% ( 1) 00:16:05.004 15.644 - 15.739: 98.9471% ( 2) 00:16:05.004 15.739 - 15.834: 98.9544% ( 1) 00:16:05.004 15.834 - 15.929: 98.9690% ( 2) 00:16:05.004 15.929 - 16.024: 98.9909% ( 3) 00:16:05.004 16.024 - 16.119: 99.0056% ( 2) 00:16:05.004 16.119 - 16.213: 99.0421% ( 5) 00:16:05.004 16.213 - 16.308: 99.0641% ( 3) 00:16:05.004 16.308 - 16.403: 99.0787% ( 2) 00:16:05.004 16.403 - 16.498: 99.1591% ( 11) 00:16:05.004 16.498 - 16.593: 99.2469% ( 12) 00:16:05.004 16.593 - 16.687: 99.2761% ( 4) 00:16:05.004 16.687 - 16.782: 99.3054% ( 4) 00:16:05.004 16.782 - 16.877: 99.3200% ( 2) 00:16:05.004 16.877 - 16.972: 99.3419% ( 3) 00:16:05.004 16.972 - 17.067: 99.3492% ( 1) 00:16:05.004 17.067 - 17.161: 99.3638% ( 2) 00:16:05.004 17.161 - 17.256: 99.3712% ( 1) 00:16:05.004 17.256 - 17.351: 99.3785% ( 1) 00:16:05.004 17.541 - 17.636: 99.3931% ( 2) 00:16:05.004 17.730 - 17.825: 99.4004% ( 1) 00:16:05.004 17.920 - 18.015: 99.4150% ( 2) 00:16:05.004 18.394 - 18.489: 99.4223% ( 1) 00:16:05.004 18.489 - 18.584: 99.4297% ( 1) 00:16:05.004 19.816 - 19.911: 99.4370% ( 1) 00:16:05.004 3980.705 - 4004.978: 99.9049% ( 64) 00:16:05.004 4004.978 - 4029.250: 99.9854% ( 11) 00:16:05.004 4029.250 - 4053.523: 99.9927% ( 1) 00:16:05.004 4126.341 - 4150.613: 100.0000% ( 1) 00:16:05.004 00:16:05.004 17:08:21 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:05.004 17:08:21 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:05.004 17:08:21 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:05.004 17:08:21 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:05.004 17:08:21 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:05.262 [ 00:16:05.262 { 00:16:05.262 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:16:05.262 "subtype": "Discovery", 00:16:05.262 "listen_addresses": [], 00:16:05.262 "allow_any_host": true, 00:16:05.262 "hosts": [] 00:16:05.262 }, 00:16:05.262 { 00:16:05.262 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:05.262 "subtype": "NVMe", 00:16:05.262 "listen_addresses": [ 00:16:05.262 { 00:16:05.262 "transport": "VFIOUSER", 00:16:05.262 "trtype": "VFIOUSER", 00:16:05.262 "adrfam": "IPv4", 00:16:05.262 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:05.262 "trsvcid": "0" 00:16:05.262 } 00:16:05.262 ], 00:16:05.262 "allow_any_host": true, 00:16:05.262 "hosts": [], 00:16:05.262 "serial_number": "SPDK1", 00:16:05.262 "model_number": "SPDK bdev Controller", 00:16:05.262 "max_namespaces": 32, 00:16:05.262 "min_cntlid": 1, 00:16:05.262 "max_cntlid": 65519, 00:16:05.262 "namespaces": [ 00:16:05.262 { 00:16:05.262 "nsid": 1, 00:16:05.262 "bdev_name": "Malloc1", 00:16:05.262 "name": "Malloc1", 00:16:05.262 "nguid": "E44029E14D374089844FF2BEBD2A5B05", 00:16:05.262 "uuid": "e44029e1-4d37-4089-844f-f2bebd2a5b05" 00:16:05.262 }, 00:16:05.262 { 00:16:05.262 "nsid": 2, 00:16:05.262 "bdev_name": "Malloc3", 00:16:05.262 "name": "Malloc3", 00:16:05.262 "nguid": "059B3ED214A549FA8817A3D997AA74A0", 00:16:05.262 "uuid": "059b3ed2-14a5-49fa-8817-a3d997aa74a0" 00:16:05.262 } 00:16:05.262 ] 00:16:05.262 }, 00:16:05.262 { 00:16:05.262 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:05.262 "subtype": "NVMe", 00:16:05.262 "listen_addresses": [ 00:16:05.262 { 00:16:05.262 "transport": "VFIOUSER", 00:16:05.262 "trtype": "VFIOUSER", 00:16:05.262 "adrfam": "IPv4", 00:16:05.262 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:05.262 "trsvcid": "0" 00:16:05.262 } 00:16:05.262 ], 00:16:05.262 "allow_any_host": true, 00:16:05.262 "hosts": [], 00:16:05.262 "serial_number": "SPDK2", 00:16:05.262 "model_number": "SPDK bdev Controller", 00:16:05.262 "max_namespaces": 32, 00:16:05.262 "min_cntlid": 1, 00:16:05.262 "max_cntlid": 65519, 00:16:05.262 "namespaces": [ 00:16:05.262 { 00:16:05.262 "nsid": 1, 00:16:05.262 "bdev_name": "Malloc2", 00:16:05.262 "name": "Malloc2", 00:16:05.262 "nguid": "F2E59488306E40A48516FD46A83D85DE", 00:16:05.262 "uuid": "f2e59488-306e-40a4-8516-fd46a83d85de" 00:16:05.262 } 00:16:05.262 ] 00:16:05.262 } 00:16:05.262 ] 00:16:05.262 17:08:21 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:05.262 17:08:21 -- target/nvmf_vfio_user.sh@34 -- # aerpid=517650 00:16:05.262 17:08:21 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:05.263 17:08:21 -- common/autotest_common.sh@1244 -- # local i=0 00:16:05.263 17:08:21 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:05.263 17:08:21 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:05.263 17:08:21 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:05.263 17:08:21 -- common/autotest_common.sh@1255 -- # return 0 00:16:05.263 17:08:21 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:05.263 17:08:21 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:05.263 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.521 Malloc4 00:16:05.521 17:08:21 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:05.778 17:08:21 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:05.778 Asynchronous Event Request test 00:16:05.778 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.778 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.778 Registering asynchronous event callbacks... 00:16:05.778 Starting namespace attribute notice tests for all controllers... 00:16:05.778 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:05.778 aer_cb - Changed Namespace 00:16:05.778 Cleaning up... 00:16:06.036 [ 00:16:06.036 { 00:16:06.036 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:06.036 "subtype": "Discovery", 00:16:06.036 "listen_addresses": [], 00:16:06.036 "allow_any_host": true, 00:16:06.036 "hosts": [] 00:16:06.036 }, 00:16:06.036 { 00:16:06.036 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:06.036 "subtype": "NVMe", 00:16:06.036 "listen_addresses": [ 00:16:06.036 { 00:16:06.036 "transport": "VFIOUSER", 00:16:06.036 "trtype": "VFIOUSER", 00:16:06.036 "adrfam": "IPv4", 00:16:06.036 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:06.036 "trsvcid": "0" 00:16:06.036 } 00:16:06.036 ], 00:16:06.036 "allow_any_host": true, 00:16:06.036 "hosts": [], 00:16:06.036 "serial_number": "SPDK1", 00:16:06.036 "model_number": "SPDK bdev Controller", 00:16:06.036 "max_namespaces": 32, 00:16:06.036 "min_cntlid": 1, 00:16:06.036 "max_cntlid": 65519, 00:16:06.036 "namespaces": [ 00:16:06.036 { 00:16:06.036 "nsid": 1, 00:16:06.036 "bdev_name": "Malloc1", 00:16:06.036 "name": "Malloc1", 00:16:06.036 "nguid": "E44029E14D374089844FF2BEBD2A5B05", 00:16:06.036 "uuid": "e44029e1-4d37-4089-844f-f2bebd2a5b05" 00:16:06.036 }, 00:16:06.036 { 00:16:06.036 "nsid": 2, 00:16:06.036 "bdev_name": "Malloc3", 00:16:06.036 "name": "Malloc3", 00:16:06.036 "nguid": "059B3ED214A549FA8817A3D997AA74A0", 00:16:06.036 "uuid": "059b3ed2-14a5-49fa-8817-a3d997aa74a0" 00:16:06.036 } 00:16:06.036 ] 00:16:06.036 }, 00:16:06.036 { 00:16:06.036 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:06.036 "subtype": "NVMe", 00:16:06.036 "listen_addresses": [ 00:16:06.036 { 00:16:06.036 "transport": "VFIOUSER", 00:16:06.036 "trtype": "VFIOUSER", 00:16:06.037 "adrfam": "IPv4", 00:16:06.037 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:06.037 "trsvcid": "0" 00:16:06.037 } 00:16:06.037 ], 00:16:06.037 "allow_any_host": true, 00:16:06.037 "hosts": [], 00:16:06.037 "serial_number": "SPDK2", 00:16:06.037 "model_number": "SPDK bdev Controller", 00:16:06.037 "max_namespaces": 32, 00:16:06.037 "min_cntlid": 1, 00:16:06.037 "max_cntlid": 65519, 00:16:06.037 "namespaces": [ 00:16:06.037 { 00:16:06.037 "nsid": 1, 00:16:06.037 "bdev_name": "Malloc2", 00:16:06.037 "name": "Malloc2", 00:16:06.037 "nguid": "F2E59488306E40A48516FD46A83D85DE", 00:16:06.037 "uuid": "f2e59488-306e-40a4-8516-fd46a83d85de" 
00:16:06.037 }, 00:16:06.037 { 00:16:06.037 "nsid": 2, 00:16:06.037 "bdev_name": "Malloc4", 00:16:06.037 "name": "Malloc4", 00:16:06.037 "nguid": "1F2900C2AC96407D895891DD19C91AAE", 00:16:06.037 "uuid": "1f2900c2-ac96-407d-8958-91dd19c91aae" 00:16:06.037 } 00:16:06.037 ] 00:16:06.037 } 00:16:06.037 ] 00:16:06.037 17:08:22 -- target/nvmf_vfio_user.sh@44 -- # wait 517650 00:16:06.037 17:08:22 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:06.037 17:08:22 -- target/nvmf_vfio_user.sh@95 -- # killprocess 511752 00:16:06.037 17:08:22 -- common/autotest_common.sh@926 -- # '[' -z 511752 ']' 00:16:06.037 17:08:22 -- common/autotest_common.sh@930 -- # kill -0 511752 00:16:06.037 17:08:22 -- common/autotest_common.sh@931 -- # uname 00:16:06.037 17:08:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:06.037 17:08:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 511752 00:16:06.037 17:08:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:06.037 17:08:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:06.037 17:08:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 511752' 00:16:06.037 killing process with pid 511752 00:16:06.037 17:08:22 -- common/autotest_common.sh@945 -- # kill 511752 00:16:06.037 [2024-07-20 17:08:22.104052] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:06.037 17:08:22 -- common/autotest_common.sh@950 -- # wait 511752 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=517796 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 517796' 00:16:06.295 Process pid: 517796 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:06.295 17:08:22 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 517796 00:16:06.552 17:08:22 -- common/autotest_common.sh@819 -- # '[' -z 517796 ']' 00:16:06.552 17:08:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.552 17:08:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:06.552 17:08:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.552 17:08:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:06.552 17:08:22 -- common/autotest_common.sh@10 -- # set +x 00:16:06.552 [2024-07-20 17:08:22.495584] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:06.552 [2024-07-20 17:08:22.496707] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
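Stripped of the harness plumbing, the AER exercise above is a three-step flow: start the aer listener, hot-add a second namespace so the controller raises a notice, and wait for the touch file. A sketch under the same paths as the traced commands (SPDK_DIR is shorthand introduced here):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # aer blocks until a Changed Namespace List notice (log page 4) arrives,
  # then creates the touch file and exits
  "$SPDK_DIR/test/nvme/aer/aer" \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
    -n 2 -g -t /tmp/aer_touch_file &
  # hot-adding Malloc4 as NSID 2 is what triggers the notice
  "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 --name Malloc4
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  wait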
00:16:06.552 [2024-07-20 17:08:22.496767] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.552 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.552 [2024-07-20 17:08:22.560758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:06.552 [2024-07-20 17:08:22.651923] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:06.552 [2024-07-20 17:08:22.652099] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.552 [2024-07-20 17:08:22.652118] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.552 [2024-07-20 17:08:22.652133] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:06.553 [2024-07-20 17:08:22.652224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.553 [2024-07-20 17:08:22.652294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.553 [2024-07-20 17:08:22.652360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.553 [2024-07-20 17:08:22.652358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.810 [2024-07-20 17:08:22.750040] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:06.810 [2024-07-20 17:08:22.750320] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:06.810 [2024-07-20 17:08:22.750579] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:06.810 [2024-07-20 17:08:22.751313] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:06.810 [2024-07-20 17:08:22.751432] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
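The restart above brings the same target back pinned to four cores with --interrupt-mode, after which each poll-group thread is switched to interrupt-driven operation and the VFIOUSER transport is created with the -M -I arguments seen in the next block. A sketch of the equivalent launch (SPDK_DIR and the sleep stand in for the harness's waitforlisten logic):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  sleep 1   # crude stand-in for polling /var/tmp/spdk.sock
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I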
00:16:07.374 17:08:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:07.374 17:08:23 -- common/autotest_common.sh@852 -- # return 0 00:16:07.374 17:08:23 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:08.306 17:08:24 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:08.563 17:08:24 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:08.563 17:08:24 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:08.563 17:08:24 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:08.563 17:08:24 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:08.563 17:08:24 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:08.822 Malloc1 00:16:08.822 17:08:24 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:09.079 17:08:25 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:09.336 17:08:25 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:09.593 17:08:25 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:09.593 17:08:25 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:09.593 17:08:25 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:09.850 Malloc2 00:16:09.850 17:08:25 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:10.106 17:08:26 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:10.363 17:08:26 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:10.620 17:08:26 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:10.620 17:08:26 -- target/nvmf_vfio_user.sh@95 -- # killprocess 517796 00:16:10.620 17:08:26 -- common/autotest_common.sh@926 -- # '[' -z 517796 ']' 00:16:10.620 17:08:26 -- common/autotest_common.sh@930 -- # kill -0 517796 00:16:10.620 17:08:26 -- common/autotest_common.sh@931 -- # uname 00:16:10.620 17:08:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:10.620 17:08:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 517796 00:16:10.620 17:08:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:10.620 17:08:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:10.620 17:08:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 517796' 00:16:10.620 killing process with pid 517796 00:16:10.620 17:08:26 -- common/autotest_common.sh@945 -- # kill 517796 00:16:10.620 17:08:26 -- common/autotest_common.sh@950 -- # wait 517796 00:16:10.878 17:08:26 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:16:10.878 17:08:26 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:10.878 00:16:10.878 real 0m54.257s 00:16:10.878 user 3m34.424s 00:16:10.878 sys 0m4.654s 00:16:10.878 17:08:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.878 17:08:26 -- common/autotest_common.sh@10 -- # set +x 00:16:10.878 ************************************ 00:16:10.878 END TEST nvmf_vfio_user 00:16:10.878 ************************************ 00:16:10.878 17:08:27 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:10.878 17:08:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:10.878 17:08:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.878 17:08:27 -- common/autotest_common.sh@10 -- # set +x 00:16:10.878 ************************************ 00:16:10.878 START TEST nvmf_vfio_user_nvme_compliance 00:16:10.878 ************************************ 00:16:10.878 17:08:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:11.135 * Looking for test storage... 00:16:11.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:11.135 17:08:27 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.135 17:08:27 -- nvmf/common.sh@7 -- # uname -s 00:16:11.135 17:08:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.135 17:08:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.135 17:08:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.135 17:08:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.135 17:08:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.135 17:08:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.135 17:08:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.135 17:08:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.135 17:08:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.135 17:08:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.135 17:08:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.135 17:08:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.135 17:08:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.135 17:08:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.135 17:08:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.135 17:08:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.135 17:08:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.135 17:08:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.135 17:08:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.135 17:08:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.135 17:08:27 -- paths/export.sh@5 -- # export PATH 00:16:11.135 17:08:27 -- nvmf/common.sh@46 -- # : 0 00:16:11.135 17:08:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:11.135 17:08:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:11.135 17:08:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:11.135 17:08:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.135 17:08:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.135 17:08:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:11.135 17:08:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:11.135 17:08:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:11.135 17:08:27 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.135 17:08:27 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.135 17:08:27 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:11.135 17:08:27 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:11.135 17:08:27 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:11.135 17:08:27 -- compliance/compliance.sh@20 -- # nvmfpid=518438 00:16:11.135 17:08:27 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m
0x7 00:16:11.135 17:08:27 -- compliance/compliance.sh@21 -- # echo 'Process pid: 518438' 00:16:11.135 Process pid: 518438 00:16:11.135 17:08:27 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:11.135 17:08:27 -- compliance/compliance.sh@24 -- # waitforlisten 518438 00:16:11.135 17:08:27 -- common/autotest_common.sh@819 -- # '[' -z 518438 ']' 00:16:11.135 17:08:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.135 17:08:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:11.135 17:08:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.135 17:08:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:11.135 17:08:27 -- common/autotest_common.sh@10 -- # set +x 00:16:11.135 [2024-07-20 17:08:27.127227] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:11.135 [2024-07-20 17:08:27.127327] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.135 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.135 [2024-07-20 17:08:27.188217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.135 [2024-07-20 17:08:27.269853] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:11.135 [2024-07-20 17:08:27.270010] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.135 [2024-07-20 17:08:27.270027] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.135 [2024-07-20 17:08:27.270039] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
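The rpc_cmd calls in the block that follows perform the compliance target bring-up; written out as plain rpc.py invocations, the sequence is roughly this sketch (RPC and SPDK_DIR are shorthand introduced here; every argument comes from the traced commands):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  "$RPC" nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  "$RPC" bdev_malloc_create 64 512 -b malloc0
  "$RPC" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  "$RPC" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  # then the compliance binary is pointed at the socket:
  "$SPDK_DIR/test/nvme/compliance/nvme_compliance" -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'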
00:16:11.135 [2024-07-20 17:08:27.270183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.135 [2024-07-20 17:08:27.270251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.135 [2024-07-20 17:08:27.270254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.065 17:08:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:12.065 17:08:28 -- common/autotest_common.sh@852 -- # return 0 00:16:12.065 17:08:28 -- compliance/compliance.sh@26 -- # sleep 1 00:16:12.998 17:08:29 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:12.998 17:08:29 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:12.998 17:08:29 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:12.998 17:08:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.998 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:16:12.998 17:08:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.998 17:08:29 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:12.998 17:08:29 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:12.998 17:08:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.998 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:16:12.998 malloc0 00:16:12.998 17:08:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.998 17:08:29 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:12.998 17:08:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.998 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:16:12.998 17:08:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.998 17:08:29 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:12.998 17:08:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.998 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:16:12.998 17:08:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.998 17:08:29 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:12.998 17:08:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.998 17:08:29 -- common/autotest_common.sh@10 -- # set +x 00:16:12.998 17:08:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.998 17:08:29 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:13.255 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.255 00:16:13.255 00:16:13.255 CUnit - A unit testing framework for C - Version 2.1-3 00:16:13.255 http://cunit.sourceforge.net/ 00:16:13.255 00:16:13.255 00:16:13.255 Suite: nvme_compliance 00:16:13.255 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-20 17:08:29.267917] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:13.255 [2024-07-20 17:08:29.267963] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:13.255 [2024-07-20 17:08:29.267991] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:13.255 passed 00:16:13.255 Test: admin_identify_ctrlr_verify_fused ...passed 00:16:13.511 Test: admin_identify_ns ...[2024-07-20 
17:08:29.507815] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:13.511 [2024-07-20 17:08:29.515824] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:13.511 passed 00:16:13.511 Test: admin_get_features_mandatory_features ...passed 00:16:13.768 Test: admin_get_features_optional_features ...passed 00:16:14.026 Test: admin_set_features_number_of_queues ...passed 00:16:14.026 Test: admin_get_log_page_mandatory_logs ...passed 00:16:14.026 Test: admin_get_log_page_with_lpo ...[2024-07-20 17:08:30.141825] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:14.284 passed 00:16:14.284 Test: fabric_property_get ...passed 00:16:14.284 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-20 17:08:30.324977] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:14.284 passed 00:16:14.541 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-20 17:08:30.495807] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:14.541 [2024-07-20 17:08:30.511803] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:14.541 passed 00:16:14.541 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-20 17:08:30.602052] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:14.541 passed 00:16:14.799 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-20 17:08:30.765819] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:14.799 [2024-07-20 17:08:30.789818] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:14.799 passed 00:16:14.799 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-20 17:08:30.879232] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:14.799 [2024-07-20 17:08:30.879300] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:14.799 passed 00:16:15.087 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-20 17:08:31.055816] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:15.087 [2024-07-20 17:08:31.063799] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:15.087 [2024-07-20 17:08:31.071817] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:15.087 [2024-07-20 17:08:31.079799] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:15.087 passed 00:16:15.087 Test: admin_create_io_sq_verify_pc ...[2024-07-20 17:08:31.210818] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:15.351 passed 00:16:16.285 Test: admin_create_io_qp_max_qps ...[2024-07-20 17:08:32.407825] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:16.851 passed 00:16:17.107 Test: admin_create_io_sq_shared_cq ...[2024-07-20 17:08:33.012817] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:17.107 passed 00:16:17.107 00:16:17.107 Run Summary: Type Total Ran Passed Failed Inactive 00:16:17.107 suites 1 1 n/a 0 0 00:16:17.107 tests 18 18 18 0 0 00:16:17.107 asserts 360 360 360 0 n/a 00:16:17.107 00:16:17.107 Elapsed time = 1.567 seconds 00:16:17.107 
17:08:33 -- compliance/compliance.sh@42 -- # killprocess 518438 00:16:17.107 17:08:33 -- common/autotest_common.sh@926 -- # '[' -z 518438 ']' 00:16:17.107 17:08:33 -- common/autotest_common.sh@930 -- # kill -0 518438 00:16:17.107 17:08:33 -- common/autotest_common.sh@931 -- # uname 00:16:17.107 17:08:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:17.107 17:08:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 518438 00:16:17.107 17:08:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:17.107 17:08:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:17.107 17:08:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 518438' 00:16:17.107 killing process with pid 518438 00:16:17.107 17:08:33 -- common/autotest_common.sh@945 -- # kill 518438 00:16:17.107 17:08:33 -- common/autotest_common.sh@950 -- # wait 518438 00:16:17.365 17:08:33 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:17.365 17:08:33 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:17.365 00:16:17.365 real 0m6.337s 00:16:17.365 user 0m18.174s 00:16:17.365 sys 0m0.578s 00:16:17.365 17:08:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.365 17:08:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.365 ************************************ 00:16:17.365 END TEST nvmf_vfio_user_nvme_compliance 00:16:17.365 ************************************ 00:16:17.365 17:08:33 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:17.365 17:08:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:17.365 17:08:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:17.365 17:08:33 -- common/autotest_common.sh@10 -- # set +x 00:16:17.365 ************************************ 00:16:17.365 START TEST nvmf_vfio_user_fuzz 00:16:17.365 ************************************ 00:16:17.365 17:08:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:17.365 * Looking for test storage... 
00:16:17.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.365 17:08:33 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=519287 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 519287' 00:16:17.366 Process pid: 519287 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:17.366 17:08:33 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 519287 00:16:17.366 17:08:33 -- common/autotest_common.sh@819 -- # '[' -z 519287 ']' 00:16:17.366 17:08:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.366 17:08:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.366 17:08:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
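Once the target is listening, the fuzz pass below reuses the same malloc0/cnode0 bring-up as the compliance test and then runs the generic NVMe fuzzer against the vfio-user socket for 30 seconds with a fixed seed, which keeps the run reproducible. The invocation, as a sketch with SPDK_DIR as shorthand (all flags from the traced command):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t bounds the run at 30 seconds; -S pins the random seed so the same
  # command stream can be replayed; -F selects the vfio-user target
  "$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/vfio_user_fuzz \
    -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a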
00:16:17.366 17:08:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.366 17:08:33 -- common/autotest_common.sh@10 -- # set +x 00:16:18.297 17:08:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.297 17:08:34 -- common/autotest_common.sh@852 -- # return 0 00:16:18.297 17:08:34 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:19.666 17:08:35 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:19.666 17:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.666 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.666 17:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.666 17:08:35 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:19.666 17:08:35 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:19.666 17:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.666 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.666 malloc0 00:16:19.666 17:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.666 17:08:35 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:19.666 17:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.666 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.666 17:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.666 17:08:35 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:19.666 17:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.666 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.666 17:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.666 17:08:35 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:19.666 17:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.666 17:08:35 -- common/autotest_common.sh@10 -- # set +x 00:16:19.666 17:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.666 17:08:35 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:19.666 17:08:35 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:51.727 Fuzzing completed. 
Shutting down the fuzz application 00:16:51.727 00:16:51.727 Dumping successful admin opcodes: 00:16:51.727 8, 9, 10, 24, 00:16:51.727 Dumping successful io opcodes: 00:16:51.727 0, 00:16:51.727 NS: 0x200003a1ef00 I/O qp, Total commands completed: 559152, total successful commands: 2153, random_seed: 725814848 00:16:51.727 NS: 0x200003a1ef00 admin qp, Total commands completed: 138761, total successful commands: 1124, random_seed: 1964890816 00:16:51.727 17:09:05 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:51.727 17:09:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.727 17:09:05 -- common/autotest_common.sh@10 -- # set +x 00:16:51.727 17:09:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.727 17:09:05 -- target/vfio_user_fuzz.sh@46 -- # killprocess 519287 00:16:51.727 17:09:05 -- common/autotest_common.sh@926 -- # '[' -z 519287 ']' 00:16:51.727 17:09:05 -- common/autotest_common.sh@930 -- # kill -0 519287 00:16:51.727 17:09:05 -- common/autotest_common.sh@931 -- # uname 00:16:51.727 17:09:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:51.727 17:09:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 519287 00:16:51.727 17:09:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:51.727 17:09:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:51.727 17:09:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 519287' 00:16:51.727 killing process with pid 519287 00:16:51.727 17:09:05 -- common/autotest_common.sh@945 -- # kill 519287 00:16:51.727 17:09:05 -- common/autotest_common.sh@950 -- # wait 519287 00:16:51.727 17:09:06 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:51.727 17:09:06 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:51.727 00:16:51.727 real 0m32.915s 00:16:51.727 user 0m34.389s 00:16:51.727 sys 0m25.835s 00:16:51.727 17:09:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.727 17:09:06 -- common/autotest_common.sh@10 -- # set +x 00:16:51.727 ************************************ 00:16:51.727 END TEST nvmf_vfio_user_fuzz 00:16:51.727 ************************************ 00:16:51.727 17:09:06 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:51.727 17:09:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:51.727 17:09:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:51.727 17:09:06 -- common/autotest_common.sh@10 -- # set +x 00:16:51.727 ************************************ 00:16:51.727 START TEST nvmf_host_management 00:16:51.727 ************************************ 00:16:51.727 17:09:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:51.727 * Looking for test storage... 
00:16:51.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.727 17:09:06 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.727 17:09:06 -- nvmf/common.sh@7 -- # uname -s 00:16:51.727 17:09:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.727 17:09:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.727 17:09:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.727 17:09:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.727 17:09:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.727 17:09:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.728 17:09:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.728 17:09:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.728 17:09:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.728 17:09:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.728 17:09:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.728 17:09:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.728 17:09:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.728 17:09:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.728 17:09:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.728 17:09:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.728 17:09:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.728 17:09:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.728 17:09:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.728 17:09:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.728 17:09:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.728 17:09:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.728 17:09:06 -- paths/export.sh@5 -- # export PATH 00:16:51.728 17:09:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.728 17:09:06 -- nvmf/common.sh@46 -- # : 0 00:16:51.728 17:09:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:51.728 17:09:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:51.728 17:09:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:51.728 17:09:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.728 17:09:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.728 17:09:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:51.728 17:09:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:51.728 17:09:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:51.728 17:09:06 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.728 17:09:06 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.728 17:09:06 -- target/host_management.sh@104 -- # nvmftestinit 00:16:51.728 17:09:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:51.728 17:09:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.728 17:09:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:51.728 17:09:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:51.728 17:09:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:51.728 17:09:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.728 17:09:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.728 17:09:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.728 17:09:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:51.728 17:09:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:51.728 17:09:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:51.728 17:09:06 -- common/autotest_common.sh@10 -- # set +x 00:16:52.294 17:09:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:52.294 17:09:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:52.294 17:09:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:52.294 17:09:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:52.294 17:09:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:52.294 17:09:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:52.294 17:09:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:52.294 17:09:08 -- nvmf/common.sh@294 -- # net_devs=() 00:16:52.294 17:09:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:52.294 
17:09:08 -- nvmf/common.sh@295 -- # e810=() 00:16:52.294 17:09:08 -- nvmf/common.sh@295 -- # local -ga e810 00:16:52.294 17:09:08 -- nvmf/common.sh@296 -- # x722=() 00:16:52.294 17:09:08 -- nvmf/common.sh@296 -- # local -ga x722 00:16:52.294 17:09:08 -- nvmf/common.sh@297 -- # mlx=() 00:16:52.294 17:09:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:52.294 17:09:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.294 17:09:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:52.294 17:09:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:52.294 17:09:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:52.294 17:09:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.294 17:09:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:52.294 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:52.294 17:09:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.294 17:09:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:52.294 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:52.294 17:09:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:52.294 17:09:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.294 17:09:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.294 17:09:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:52.294 17:09:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.294 17:09:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:16:52.294 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:52.294 17:09:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.294 17:09:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.294 17:09:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.294 17:09:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:52.294 17:09:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.294 17:09:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:52.294 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:52.294 17:09:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.294 17:09:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:52.294 17:09:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:52.294 17:09:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:52.294 17:09:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.294 17:09:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.294 17:09:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.294 17:09:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:52.294 17:09:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.294 17:09:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.294 17:09:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:52.294 17:09:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.294 17:09:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.294 17:09:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:52.294 17:09:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:52.294 17:09:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.294 17:09:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.294 17:09:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.294 17:09:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.294 17:09:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:52.294 17:09:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.294 17:09:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.294 17:09:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.294 17:09:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:52.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:16:52.294 00:16:52.294 --- 10.0.0.2 ping statistics --- 00:16:52.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.294 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:16:52.294 17:09:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:16:52.294 00:16:52.294 --- 10.0.0.1 ping statistics --- 00:16:52.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.294 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:16:52.294 17:09:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.294 17:09:08 -- nvmf/common.sh@410 -- # return 0 00:16:52.294 17:09:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:52.294 17:09:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.294 17:09:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:52.294 17:09:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.294 17:09:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:52.294 17:09:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:52.294 17:09:08 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:52.294 17:09:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:52.294 17:09:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:52.294 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:16:52.294 ************************************ 00:16:52.294 START TEST nvmf_host_management 00:16:52.294 ************************************ 00:16:52.294 17:09:08 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:52.294 17:09:08 -- target/host_management.sh@69 -- # starttarget 00:16:52.294 17:09:08 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:52.294 17:09:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:52.294 17:09:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:52.294 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:16:52.295 17:09:08 -- nvmf/common.sh@469 -- # nvmfpid=525337 00:16:52.295 17:09:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:52.295 17:09:08 -- nvmf/common.sh@470 -- # waitforlisten 525337 00:16:52.295 17:09:08 -- common/autotest_common.sh@819 -- # '[' -z 525337 ']' 00:16:52.295 17:09:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.295 17:09:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:52.295 17:09:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.295 17:09:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:52.295 17:09:08 -- common/autotest_common.sh@10 -- # set +x 00:16:52.552 [2024-07-20 17:09:08.454218] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:52.552 [2024-07-20 17:09:08.454313] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.552 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.552 [2024-07-20 17:09:08.527044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.552 [2024-07-20 17:09:08.613677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:52.552 [2024-07-20 17:09:08.613856] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.552 [2024-07-20 17:09:08.613874] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.552 [2024-07-20 17:09:08.613887] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.552 [2024-07-20 17:09:08.613973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.552 [2024-07-20 17:09:08.614068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.552 [2024-07-20 17:09:08.614185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:52.552 [2024-07-20 17:09:08.614187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.484 17:09:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:53.484 17:09:09 -- common/autotest_common.sh@852 -- # return 0 00:16:53.484 17:09:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:53.484 17:09:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:53.484 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:53.484 17:09:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.484 17:09:09 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.484 17:09:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.484 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:53.484 [2024-07-20 17:09:09.418377] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.484 17:09:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.484 17:09:09 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:53.484 17:09:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:53.484 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:53.484 17:09:09 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:53.484 17:09:09 -- target/host_management.sh@23 -- # cat 00:16:53.484 17:09:09 -- target/host_management.sh@30 -- # rpc_cmd 00:16:53.484 17:09:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.484 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:53.484 Malloc0 00:16:53.484 [2024-07-20 17:09:09.479229] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.484 17:09:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.484 17:09:09 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:53.484 17:09:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:53.484 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:53.484 17:09:09 -- target/host_management.sh@73 -- # perfpid=525654 00:16:53.484 17:09:09 -- target/host_management.sh@74 -- # 
waitforlisten 525654 /var/tmp/bdevperf.sock 00:16:53.484 17:09:09 -- common/autotest_common.sh@819 -- # '[' -z 525654 ']' 00:16:53.484 17:09:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.484 17:09:09 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:53.484 17:09:09 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:53.484 17:09:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.484 17:09:09 -- nvmf/common.sh@520 -- # config=() 00:16:53.484 17:09:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.485 17:09:09 -- nvmf/common.sh@520 -- # local subsystem config 00:16:53.485 17:09:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.485 17:09:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:53.485 17:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:53.485 17:09:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:53.485 { 00:16:53.485 "params": { 00:16:53.485 "name": "Nvme$subsystem", 00:16:53.485 "trtype": "$TEST_TRANSPORT", 00:16:53.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.485 "adrfam": "ipv4", 00:16:53.485 "trsvcid": "$NVMF_PORT", 00:16:53.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.485 "hdgst": ${hdgst:-false}, 00:16:53.485 "ddgst": ${ddgst:-false} 00:16:53.485 }, 00:16:53.485 "method": "bdev_nvme_attach_controller" 00:16:53.485 } 00:16:53.485 EOF 00:16:53.485 )") 00:16:53.485 17:09:09 -- nvmf/common.sh@542 -- # cat 00:16:53.485 17:09:09 -- nvmf/common.sh@544 -- # jq . 00:16:53.485 17:09:09 -- nvmf/common.sh@545 -- # IFS=, 00:16:53.485 17:09:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:53.485 "params": { 00:16:53.485 "name": "Nvme0", 00:16:53.485 "trtype": "tcp", 00:16:53.485 "traddr": "10.0.0.2", 00:16:53.485 "adrfam": "ipv4", 00:16:53.485 "trsvcid": "4420", 00:16:53.485 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.485 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:53.485 "hdgst": false, 00:16:53.485 "ddgst": false 00:16:53.485 }, 00:16:53.485 "method": "bdev_nvme_attach_controller" 00:16:53.485 }' 00:16:53.485 [2024-07-20 17:09:09.556875] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:53.485 [2024-07-20 17:09:09.556948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525654 ] 00:16:53.485 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.485 [2024-07-20 17:09:09.621980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.742 [2024-07-20 17:09:09.707825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.999 Running I/O for 10 seconds... 
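The --json /dev/fd/63 handed to bdevperf above is generated by gen_nvmf_target_json; the trace printf's only the bdev_nvme_attach_controller params block. Written out as a regular file, a roughly equivalent standalone config would look like the sketch below; the outer subsystems/bdev wrapper is assumed from SPDK's JSON config layout and does not appear verbatim in the log.

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # same queue depth (-q 64), I/O size (-o 65536), workload and runtime as above
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json \
        -q 64 -o 65536 -w verify -t 10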
00:16:54.567 17:09:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.567 17:09:10 -- common/autotest_common.sh@852 -- # return 0 00:16:54.567 17:09:10 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:54.567 17:09:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.567 17:09:10 -- common/autotest_common.sh@10 -- # set +x 00:16:54.567 17:09:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.567 17:09:10 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:54.567 17:09:10 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:54.567 17:09:10 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:54.567 17:09:10 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:54.567 17:09:10 -- target/host_management.sh@52 -- # local ret=1 00:16:54.567 17:09:10 -- target/host_management.sh@53 -- # local i 00:16:54.567 17:09:10 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:54.567 17:09:10 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:54.567 17:09:10 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:54.567 17:09:10 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:54.567 17:09:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.567 17:09:10 -- common/autotest_common.sh@10 -- # set +x 00:16:54.567 17:09:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.567 17:09:10 -- target/host_management.sh@55 -- # read_io_count=1060 00:16:54.567 17:09:10 -- target/host_management.sh@58 -- # '[' 1060 -ge 100 ']' 00:16:54.567 17:09:10 -- target/host_management.sh@59 -- # ret=0 00:16:54.567 17:09:10 -- target/host_management.sh@60 -- # break 00:16:54.567 17:09:10 -- target/host_management.sh@64 -- # return 0 00:16:54.567 17:09:10 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:54.567 17:09:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.567 17:09:10 -- common/autotest_common.sh@10 -- # set +x 00:16:54.567 [2024-07-20 17:09:10.542967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97eaf0 is same with the state(5) to be set 00:16:54.567 [2024-07-20 17:09:10.543079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97eaf0 is same with the state(5) to be set 00:16:54.567 [2024-07-20 17:09:10.543107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97eaf0 is same with the state(5) to be set 00:16:54.567 [2024-07-20 17:09:10.543119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97eaf0 is same with the state(5) to be set 00:16:54.567 [2024-07-20 17:09:10.543131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97eaf0 is same with the state(5) to be set 00:16:54.567 [2024-07-20 17:09:10.543143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97eaf0 is same with the state(5) to be set 00:16:54.567 [2024-07-20 17:09:10.543162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97eaf0 is same with the state(5) to be set 00:16:54.567 [2024-07-20 17:09:10.543174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97eaf0 is same with the state(5) to 
be set 00:16:54.567
[... the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x97eaf0 repeats here; identical messages omitted ...]
[2024-07-20 17:09:10.544957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-20 17:09:10.544998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ/WRITE command + ABORTED - SQ DELETION completion pairs like the above repeat for every remaining queued I/O on qid:1; omitted ...]
[2024-07-20 17:09:10.547211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.569 [2024-07-20 17:09:10.547226] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.569 [2024-07-20 17:09:10.547337] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1221c00 was disconnected and freed. reset controller. 00:16:54.569 17:09:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.569 17:09:10 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:54.569 17:09:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.569 17:09:10 -- common/autotest_common.sh@10 -- # set +x 00:16:54.569 [2024-07-20 17:09:10.548488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:54.569 task offset: 16000 on job bdev=Nvme0n1 fails 00:16:54.569 00:16:54.569 Latency(us) 00:16:54.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.569 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.569 Job: Nvme0n1 ended in about 0.62 seconds with error 00:16:54.569 Verification LBA range: start 0x0 length 0x400 00:16:54.569 Nvme0n1 : 0.62 1811.03 113.19 102.84 0.00 33082.04 2730.67 45438.29 00:16:54.569 =================================================================================================================== 00:16:54.569 Total : 1811.03 113.19 102.84 0.00 33082.04 2730.67 45438.29 00:16:54.569 [2024-07-20 17:09:10.550471] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:54.569 [2024-07-20 17:09:10.550503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1224030 (9): Bad file descriptor 00:16:54.569 17:09:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.569 17:09:10 -- target/host_management.sh@87 -- # sleep 1 00:16:54.569 [2024-07-20 17:09:10.598284] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
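
The flood of ABORTED - SQ DELETION (00/08) completions above is the host-management test doing its job: host access to the subsystem is revoked while bdevperf still has a queue depth of 64 in flight, so deleting the submission queue fails every queued command before the bdev layer resets the controller. A minimal sketch of that revoke/restore cycle, assuming the standard rpc.py verbs and the NQNs from this run (the revoke step itself falls outside this excerpt, so treat the exact call as inferred):

#!/usr/bin/env bash
# Sketch: provoke SQ-deletion aborts by revoking host access mid-I/O,
# then restore it so the host can reconnect and reset the controller.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode0
host=nqn.2016-06.io.spdk:host0
"$rpc" nvmf_subsystem_remove_host "$subsys" "$host"   # queued I/O now completes as ABORTED - SQ DELETION
sleep 1
"$rpc" nvmf_subsystem_add_host "$subsys" "$host"      # host is readmitted; bdev_nvme resets and resumes
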
00:16:55.504 17:09:11 -- target/host_management.sh@91 -- # kill -9 525654 00:16:55.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (525654) - No such process 00:16:55.504 17:09:11 -- target/host_management.sh@91 -- # true 00:16:55.504 17:09:11 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:55.504 17:09:11 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:55.504 17:09:11 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:55.504 17:09:11 -- nvmf/common.sh@520 -- # config=() 00:16:55.504 17:09:11 -- nvmf/common.sh@520 -- # local subsystem config 00:16:55.504 17:09:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:55.504 17:09:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:55.504 { 00:16:55.504 "params": { 00:16:55.504 "name": "Nvme$subsystem", 00:16:55.504 "trtype": "$TEST_TRANSPORT", 00:16:55.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.504 "adrfam": "ipv4", 00:16:55.504 "trsvcid": "$NVMF_PORT", 00:16:55.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.504 "hdgst": ${hdgst:-false}, 00:16:55.504 "ddgst": ${ddgst:-false} 00:16:55.504 }, 00:16:55.504 "method": "bdev_nvme_attach_controller" 00:16:55.504 } 00:16:55.504 EOF 00:16:55.504 )") 00:16:55.504 17:09:11 -- nvmf/common.sh@542 -- # cat 00:16:55.504 17:09:11 -- nvmf/common.sh@544 -- # jq . 00:16:55.504 17:09:11 -- nvmf/common.sh@545 -- # IFS=, 00:16:55.504 17:09:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:55.504 "params": { 00:16:55.504 "name": "Nvme0", 00:16:55.504 "trtype": "tcp", 00:16:55.504 "traddr": "10.0.0.2", 00:16:55.504 "adrfam": "ipv4", 00:16:55.504 "trsvcid": "4420", 00:16:55.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:55.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:55.504 "hdgst": false, 00:16:55.504 "ddgst": false 00:16:55.504 }, 00:16:55.504 "method": "bdev_nvme_attach_controller" 00:16:55.504 }' 00:16:55.504 [2024-07-20 17:09:11.598090] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:55.504 [2024-07-20 17:09:11.598190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525939 ] 00:16:55.505 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.505 [2024-07-20 17:09:11.659948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.763 [2024-07-20 17:09:11.744531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.022 Running I/O for 1 seconds... 
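
gen_nvmf_target_json above emits one bdev_nvme_attach_controller stanza per subsystem and hands the result to bdevperf over the /dev/fd/62 process substitution. Written to a file instead, the generated config is roughly the following (a sketch: the params block appears verbatim in the log, while the outer subsystems/bdev wrapper is inferred from the usual shape of SPDK JSON configs):

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same invocation as the test, minus the process substitution:
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1
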
00:16:56.956 00:16:56.956 Latency(us) 00:16:56.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.956 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:56.956 Verification LBA range: start 0x0 length 0x400 00:16:56.956 Nvme0n1 : 1.02 1537.91 96.12 0.00 0.00 41051.42 4611.79 53593.88 00:16:56.957 =================================================================================================================== 00:16:56.957 Total : 1537.91 96.12 0.00 0.00 41051.42 4611.79 53593.88 00:16:57.214 17:09:13 -- target/host_management.sh@101 -- # stoptarget 00:16:57.214 17:09:13 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:57.214 17:09:13 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:57.214 17:09:13 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:57.214 17:09:13 -- target/host_management.sh@40 -- # nvmftestfini 00:16:57.214 17:09:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:57.214 17:09:13 -- nvmf/common.sh@116 -- # sync 00:16:57.214 17:09:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:57.214 17:09:13 -- nvmf/common.sh@119 -- # set +e 00:16:57.214 17:09:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:57.214 17:09:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:57.214 rmmod nvme_tcp 00:16:57.214 rmmod nvme_fabrics 00:16:57.214 rmmod nvme_keyring 00:16:57.214 17:09:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:57.214 17:09:13 -- nvmf/common.sh@123 -- # set -e 00:16:57.214 17:09:13 -- nvmf/common.sh@124 -- # return 0 00:16:57.214 17:09:13 -- nvmf/common.sh@477 -- # '[' -n 525337 ']' 00:16:57.214 17:09:13 -- nvmf/common.sh@478 -- # killprocess 525337 00:16:57.214 17:09:13 -- common/autotest_common.sh@926 -- # '[' -z 525337 ']' 00:16:57.214 17:09:13 -- common/autotest_common.sh@930 -- # kill -0 525337 00:16:57.214 17:09:13 -- common/autotest_common.sh@931 -- # uname 00:16:57.214 17:09:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:57.214 17:09:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 525337 00:16:57.214 17:09:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:57.214 17:09:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:57.214 17:09:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 525337' 00:16:57.214 killing process with pid 525337 00:16:57.214 17:09:13 -- common/autotest_common.sh@945 -- # kill 525337 00:16:57.214 17:09:13 -- common/autotest_common.sh@950 -- # wait 525337 00:16:57.472 [2024-07-20 17:09:13.525374] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:57.472 17:09:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:57.472 17:09:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:57.472 17:09:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:57.472 17:09:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.472 17:09:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:57.472 17:09:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.472 17:09:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.472 17:09:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.005 17:09:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:00.005 
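
nvmftestfini's teardown shows up piecemeal above: sync, unload the kernel NVMe/TCP modules with up to 20 retries, kill the target, and tear the namespace down; the final address flush lands just below. Condensed into one hedged sketch (module and namespace names are the ones in the log; the netns delete is the assumed body of _remove_spdk_ns, which the log only evals):

# Sketch of the cleanup the log performs inline after each test.
sync
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break    # also unloads nvme_fabrics / nvme_keyring
    sleep 1
done
modprobe -v -r nvme-fabrics
kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid"   # $nvmfpid captured when the target started
ip netns delete cvl_0_0_ns_spdk 2>/dev/null      # assumed _remove_spdk_ns equivalent
ip -4 addr flush cvl_0_1
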
00:17:00.005 real 0m7.199s 00:17:00.005 user 0m21.697s 00:17:00.005 sys 0m1.356s 00:17:00.005 17:09:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.005 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:17:00.005 ************************************ 00:17:00.005 END TEST nvmf_host_management 00:17:00.005 ************************************ 00:17:00.005 17:09:15 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:00.005 00:17:00.005 real 0m9.301s 00:17:00.005 user 0m22.390s 00:17:00.005 sys 0m2.778s 00:17:00.005 17:09:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.005 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:17:00.005 ************************************ 00:17:00.005 END TEST nvmf_host_management 00:17:00.005 ************************************ 00:17:00.005 17:09:15 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:00.005 17:09:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:00.005 17:09:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:00.005 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:17:00.005 ************************************ 00:17:00.005 START TEST nvmf_lvol 00:17:00.005 ************************************ 00:17:00.005 17:09:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:00.005 * Looking for test storage... 00:17:00.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.005 17:09:15 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.005 17:09:15 -- nvmf/common.sh@7 -- # uname -s 00:17:00.005 17:09:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.005 17:09:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.005 17:09:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.005 17:09:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.005 17:09:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.005 17:09:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.005 17:09:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.005 17:09:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.005 17:09:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.005 17:09:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.005 17:09:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.005 17:09:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.005 17:09:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.005 17:09:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.005 17:09:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.005 17:09:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.005 17:09:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.005 17:09:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.005 17:09:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.005 17:09:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain trio repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.005 17:09:15 -- paths/export.sh@3 -- # PATH=[as above, /opt/go/1.21.1/bin prepended] 00:17:00.005 17:09:15 -- paths/export.sh@4 -- # PATH=[as above, /opt/protoc/21.7/bin prepended] 00:17:00.005 17:09:15 -- paths/export.sh@5 -- # export PATH 00:17:00.005 17:09:15 -- paths/export.sh@6 -- # echo [the exported PATH; accumulated duplicate toolchain prefixes condensed] 00:17:00.005 17:09:15 -- nvmf/common.sh@46 -- # : 0 00:17:00.005 17:09:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:00.005 17:09:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:00.005 17:09:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:00.005 17:09:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.005 17:09:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.005 17:09:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:00.005 17:09:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:00.005 17:09:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:00.005 17:09:15 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.005 17:09:15 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.005 17:09:15 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:00.005 17:09:15 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:00.005 17:09:15 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.005 17:09:15 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:00.005 17:09:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:00.005 17:09:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:17:00.005 17:09:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:00.005 17:09:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:00.005 17:09:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:00.005 17:09:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.005 17:09:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.005 17:09:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.005 17:09:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:00.005 17:09:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:00.005 17:09:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:00.005 17:09:15 -- common/autotest_common.sh@10 -- # set +x 00:17:01.945 17:09:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:01.945 17:09:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:01.945 17:09:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:01.945 17:09:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:01.945 17:09:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:01.945 17:09:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:01.945 17:09:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:01.945 17:09:17 -- nvmf/common.sh@294 -- # net_devs=() 00:17:01.945 17:09:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:01.945 17:09:17 -- nvmf/common.sh@295 -- # e810=() 00:17:01.945 17:09:17 -- nvmf/common.sh@295 -- # local -ga e810 00:17:01.945 17:09:17 -- nvmf/common.sh@296 -- # x722=() 00:17:01.945 17:09:17 -- nvmf/common.sh@296 -- # local -ga x722 00:17:01.945 17:09:17 -- nvmf/common.sh@297 -- # mlx=() 00:17:01.945 17:09:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:01.945 17:09:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.945 17:09:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:01.945 17:09:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:01.945 17:09:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:01.945 17:09:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.945 17:09:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.945 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.945 17:09:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.945 17:09:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.945 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.945 17:09:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:01.945 17:09:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.945 17:09:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.945 17:09:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.945 17:09:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.945 17:09:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.945 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.945 17:09:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.945 17:09:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.945 17:09:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.945 17:09:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.945 17:09:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.945 17:09:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.945 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:01.945 17:09:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.945 17:09:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:01.945 17:09:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:01.945 17:09:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:01.945 17:09:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.945 17:09:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.945 17:09:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.945 17:09:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:01.945 17:09:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.945 17:09:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.945 17:09:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:01.945 17:09:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.945 17:09:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.945 17:09:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:01.945 17:09:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:01.945 17:09:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.945 17:09:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.945 17:09:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
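
nvmftestinit's namespace plumbing begins here and finishes just below (the in-namespace address, the link-ups, the iptables accept rule, and the two ping checks). Pulled together, the topology it builds looks like this; a sketch using the interface names and addresses from this run, with the target-side port cvl_0_0 isolated in its own namespace and the initiator side left in the root namespace:

# Build the two-namespace loopback topology used by the TCP tests.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
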
00:17:01.945 17:09:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.945 17:09:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:01.945 17:09:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.945 17:09:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.945 17:09:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.945 17:09:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:01.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:17:01.945 00:17:01.945 --- 10.0.0.2 ping statistics --- 00:17:01.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.945 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:17:01.945 17:09:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:01.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:17:01.945 00:17:01.945 --- 10.0.0.1 ping statistics --- 00:17:01.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.945 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:17:01.945 17:09:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.945 17:09:17 -- nvmf/common.sh@410 -- # return 0 00:17:01.945 17:09:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:01.945 17:09:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.945 17:09:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:01.945 17:09:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.945 17:09:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:01.945 17:09:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:01.945 17:09:17 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:01.945 17:09:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:01.945 17:09:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:01.945 17:09:17 -- common/autotest_common.sh@10 -- # set +x 00:17:01.945 17:09:17 -- nvmf/common.sh@469 -- # nvmfpid=528167 00:17:01.945 17:09:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:01.945 17:09:17 -- nvmf/common.sh@470 -- # waitforlisten 528167 00:17:01.945 17:09:17 -- common/autotest_common.sh@819 -- # '[' -z 528167 ']' 00:17:01.945 17:09:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.945 17:09:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.945 17:09:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.946 17:09:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.946 17:09:17 -- common/autotest_common.sh@10 -- # set +x 00:17:01.946 [2024-07-20 17:09:17.937145] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:01.946 [2024-07-20 17:09:17.937224] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.946 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.946 [2024-07-20 17:09:18.006810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.946 [2024-07-20 17:09:18.094519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:01.946 [2024-07-20 17:09:18.094706] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.946 [2024-07-20 17:09:18.094728] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.946 [2024-07-20 17:09:18.094746] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:01.946 [2024-07-20 17:09:18.094855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.946 [2024-07-20 17:09:18.094925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.946 [2024-07-20 17:09:18.094928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.878 17:09:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:02.878 17:09:18 -- common/autotest_common.sh@852 -- # return 0 00:17:02.878 17:09:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:02.878 17:09:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:02.878 17:09:18 -- common/autotest_common.sh@10 -- # set +x 00:17:02.878 17:09:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.878 17:09:18 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:03.144 [2024-07-20 17:09:19.094824] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.144 17:09:19 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:03.401 17:09:19 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:03.401 17:09:19 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:03.658 17:09:19 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:03.658 17:09:19 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:03.915 17:09:19 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:04.173 17:09:20 -- target/nvmf_lvol.sh@29 -- # lvs=3a4c79d4-eba8-4e97-a7ba-11e908352e82 00:17:04.173 17:09:20 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3a4c79d4-eba8-4e97-a7ba-11e908352e82 lvol 20 00:17:04.430 17:09:20 -- target/nvmf_lvol.sh@32 -- # lvol=36dd1c42-3e2e-40e1-8394-b209d666e5b8 00:17:04.430 17:09:20 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:04.687 17:09:20 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
36dd1c42-3e2e-40e1-8394-b209d666e5b8 00:17:04.944 17:09:20 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:04.945 [2024-07-20 17:09:21.080146] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.945 17:09:21 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:05.202 17:09:21 -- target/nvmf_lvol.sh@42 -- # perf_pid=528614 00:17:05.202 17:09:21 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:05.202 17:09:21 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:05.460 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.395 17:09:22 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 36dd1c42-3e2e-40e1-8394-b209d666e5b8 MY_SNAPSHOT 00:17:06.653 17:09:22 -- target/nvmf_lvol.sh@47 -- # snapshot=ad1e8d8f-2f19-4a1e-bab3-62b075728372 00:17:06.653 17:09:22 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 36dd1c42-3e2e-40e1-8394-b209d666e5b8 30 00:17:06.911 17:09:22 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ad1e8d8f-2f19-4a1e-bab3-62b075728372 MY_CLONE 00:17:07.168 17:09:23 -- target/nvmf_lvol.sh@49 -- # clone=5f01ed1c-3260-4adb-84c4-091c6e2c989f 00:17:07.168 17:09:23 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5f01ed1c-3260-4adb-84c4-091c6e2c989f 00:17:07.425 17:09:23 -- target/nvmf_lvol.sh@53 -- # wait 528614 00:17:17.385 Initializing NVMe Controllers 00:17:17.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:17.385 Controller IO queue size 128, less than required. 00:17:17.385 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:17.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:17.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:17.385 Initialization complete. Launching workers. 
00:17:17.385 ======================================================== 00:17:17.385 Latency(us) 00:17:17.385 Device Information : IOPS MiB/s Average min max 00:17:17.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11434.99 44.67 11197.97 1831.23 88448.66 00:17:17.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11340.99 44.30 11290.24 1863.72 63898.96 00:17:17.385 ======================================================== 00:17:17.385 Total : 22775.99 88.97 11243.92 1831.23 88448.66 00:17:17.385 00:17:17.385 17:09:31 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:17.385 17:09:32 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 36dd1c42-3e2e-40e1-8394-b209d666e5b8 00:17:17.385 17:09:32 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3a4c79d4-eba8-4e97-a7ba-11e908352e82 00:17:17.385 17:09:32 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:17.385 17:09:32 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:17.385 17:09:32 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:17.385 17:09:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:17.385 17:09:32 -- nvmf/common.sh@116 -- # sync 00:17:17.385 17:09:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:17.385 17:09:32 -- nvmf/common.sh@119 -- # set +e 00:17:17.385 17:09:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:17.385 17:09:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:17.385 rmmod nvme_tcp 00:17:17.385 rmmod nvme_fabrics 00:17:17.385 rmmod nvme_keyring 00:17:17.385 17:09:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:17.385 17:09:32 -- nvmf/common.sh@123 -- # set -e 00:17:17.385 17:09:32 -- nvmf/common.sh@124 -- # return 0 00:17:17.385 17:09:32 -- nvmf/common.sh@477 -- # '[' -n 528167 ']' 00:17:17.385 17:09:32 -- nvmf/common.sh@478 -- # killprocess 528167 00:17:17.385 17:09:32 -- common/autotest_common.sh@926 -- # '[' -z 528167 ']' 00:17:17.385 17:09:32 -- common/autotest_common.sh@930 -- # kill -0 528167 00:17:17.385 17:09:32 -- common/autotest_common.sh@931 -- # uname 00:17:17.385 17:09:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:17.385 17:09:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 528167 00:17:17.385 17:09:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:17.385 17:09:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:17.385 17:09:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 528167' 00:17:17.385 killing process with pid 528167 00:17:17.385 17:09:32 -- common/autotest_common.sh@945 -- # kill 528167 00:17:17.385 17:09:32 -- common/autotest_common.sh@950 -- # wait 528167 00:17:17.385 17:09:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:17.385 17:09:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:17.385 17:09:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:17.385 17:09:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.385 17:09:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:17.385 17:09:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.385 17:09:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.385 17:09:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
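
The numbers above come from the snapshot/clone pass that is scattered through the log: bdev_lvol_snapshot at @47, bdev_lvol_resize at @48, bdev_lvol_clone at @49, bdev_lvol_inflate at @50, all racing a bdevperf randwrite load. Collected in one place, a sketch using the UUIDs from this run (capturing the printed bdev names into variables is an assumption about rpc.py's output, which normally echoes the new object's name):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
lvol=36dd1c42-3e2e-40e1-8394-b209d666e5b8              # 20 MiB lvol under I/O
snap=$("$rpc" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # read-only snapshot, e.g. ad1e8d8f-...
"$rpc" bdev_lvol_resize "$lvol" 30                     # grow the live lvol to 30 MiB
clone=$("$rpc" bdev_lvol_clone "$snap" MY_CLONE)       # thin clone of the snapshot, e.g. 5f01ed1c-...
"$rpc" bdev_lvol_inflate "$clone"                      # copy clusters so the clone no longer depends on the snapshot
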
00:17:18.773 17:09:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:18.773 00:17:18.773 real 0m19.263s 00:17:18.773 user 1m5.822s 00:17:18.773 sys 0m5.466s 00:17:18.773 17:09:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.773 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:17:18.774 ************************************ 00:17:18.774 END TEST nvmf_lvol 00:17:18.774 ************************************ 00:17:19.033 17:09:34 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:19.033 17:09:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:19.033 17:09:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:19.033 17:09:34 -- common/autotest_common.sh@10 -- # set +x 00:17:19.033 ************************************ 00:17:19.033 START TEST nvmf_lvs_grow 00:17:19.033 ************************************ 00:17:19.033 17:09:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:19.033 * Looking for test storage... 00:17:19.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.033 17:09:34 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.033 17:09:34 -- nvmf/common.sh@7 -- # uname -s 00:17:19.033 17:09:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:19.033 17:09:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.033 17:09:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.033 17:09:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.033 17:09:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.033 17:09:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.033 17:09:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.033 17:09:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.033 17:09:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.033 17:09:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.033 17:09:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.033 17:09:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:19.033 17:09:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.033 17:09:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.033 17:09:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.033 17:09:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.033 17:09:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.033 17:09:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.033 17:09:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.033 17:09:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain trio repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.033 17:09:34 -- paths/export.sh@3 -- # PATH=[as above, /opt/go/1.21.1/bin prepended] 00:17:19.033 17:09:34 -- paths/export.sh@4 -- # PATH=[as above, /opt/protoc/21.7/bin prepended] 00:17:19.033 17:09:34 -- paths/export.sh@5 -- # export PATH 00:17:19.033 17:09:34 -- paths/export.sh@6 -- # echo [the exported PATH; accumulated duplicate toolchain prefixes condensed] 00:17:19.033 17:09:34 -- nvmf/common.sh@46 -- # : 0 00:17:19.033 17:09:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:19.033 17:09:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:19.033 17:09:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:19.033 17:09:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.033 17:09:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.033 17:09:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:19.033 17:09:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:19.033 17:09:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:19.033 17:09:35 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:19.033 17:09:35 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.033 17:09:35 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:19.033 17:09:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:19.033 17:09:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.033 17:09:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:19.033 17:09:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:19.033 17:09:35 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:17:19.033 17:09:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.033 17:09:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.033 17:09:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.033 17:09:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:19.033 17:09:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:19.033 17:09:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:19.033 17:09:35 -- common/autotest_common.sh@10 -- # set +x 00:17:20.971 17:09:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:20.971 17:09:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:20.971 17:09:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:20.971 17:09:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:20.971 17:09:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:20.971 17:09:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:20.971 17:09:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:20.971 17:09:36 -- nvmf/common.sh@294 -- # net_devs=() 00:17:20.971 17:09:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:20.971 17:09:36 -- nvmf/common.sh@295 -- # e810=() 00:17:20.971 17:09:36 -- nvmf/common.sh@295 -- # local -ga e810 00:17:20.971 17:09:36 -- nvmf/common.sh@296 -- # x722=() 00:17:20.971 17:09:36 -- nvmf/common.sh@296 -- # local -ga x722 00:17:20.971 17:09:36 -- nvmf/common.sh@297 -- # mlx=() 00:17:20.971 17:09:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:20.971 17:09:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.971 17:09:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:20.971 17:09:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:20.971 17:09:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:20.971 17:09:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:20.971 17:09:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:20.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:20.971 17:09:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:20.971 
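
The discovery walk above (it continues just below for the second port) reduces to: match each NIC's PCI device ID against the supported e810/x722/mlx lists, then read the bound kernel net device out of sysfs. A sketch of the sysfs half with this machine's BDFs:

# Map each supported PCI function to its kernel net device via sysfs.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$dev" ] || continue      # glob may not match if no driver is bound
        echo "Found net devices under $pci: ${dev##*/}"
    done
done
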
17:09:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:20.971 17:09:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:20.971 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:20.971 17:09:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:20.971 17:09:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.971 17:09:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.971 17:09:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.971 17:09:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.971 17:09:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:20.971 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:20.971 17:09:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.971 17:09:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.971 17:09:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.971 17:09:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.971 17:09:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.971 17:09:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:20.971 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:20.971 17:09:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.971 17:09:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:20.971 17:09:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:20.971 17:09:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:20.971 17:09:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:20.971 17:09:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.971 17:09:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.971 17:09:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.971 17:09:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:20.971 17:09:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.971 17:09:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.971 17:09:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:20.971 17:09:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.971 17:09:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.971 17:09:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:20.971 17:09:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:20.971 17:09:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.971 17:09:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.971 17:09:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.971 17:09:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.971 17:09:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:20.971 
17:09:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.971 17:09:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.971 17:09:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.971 17:09:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:20.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:17:20.971 00:17:20.971 --- 10.0.0.2 ping statistics --- 00:17:20.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.971 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:17:20.971 17:09:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:17:20.971 00:17:20.971 --- 10.0.0.1 ping statistics --- 00:17:20.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.971 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:17:20.971 17:09:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.971 17:09:37 -- nvmf/common.sh@410 -- # return 0 00:17:20.971 17:09:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:20.971 17:09:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.971 17:09:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:20.971 17:09:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:20.972 17:09:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.972 17:09:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:20.972 17:09:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:21.230 17:09:37 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:21.230 17:09:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:21.230 17:09:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:21.230 17:09:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.230 17:09:37 -- nvmf/common.sh@469 -- # nvmfpid=531922 00:17:21.230 17:09:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:21.230 17:09:37 -- nvmf/common.sh@470 -- # waitforlisten 531922 00:17:21.230 17:09:37 -- common/autotest_common.sh@819 -- # '[' -z 531922 ']' 00:17:21.230 17:09:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.230 17:09:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.230 17:09:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.230 17:09:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.230 17:09:37 -- common/autotest_common.sh@10 -- # set +x 00:17:21.230 [2024-07-20 17:09:37.179992] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:21.230 [2024-07-20 17:09:37.180080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.230 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.230 [2024-07-20 17:09:37.249914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.230 [2024-07-20 17:09:37.336751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:21.230 [2024-07-20 17:09:37.336948] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.230 [2024-07-20 17:09:37.336969] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.230 [2024-07-20 17:09:37.336983] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.230 [2024-07-20 17:09:37.337024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.164 17:09:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.164 17:09:38 -- common/autotest_common.sh@852 -- # return 0 00:17:22.164 17:09:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.164 17:09:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:22.164 17:09:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.164 17:09:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.164 17:09:38 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:22.421 [2024-07-20 17:09:38.417054] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:22.421 17:09:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:22.421 17:09:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:22.421 17:09:38 -- common/autotest_common.sh@10 -- # set +x 00:17:22.421 ************************************ 00:17:22.421 START TEST lvs_grow_clean 00:17:22.421 ************************************ 00:17:22.421 17:09:38 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.421 17:09:38 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:22.679 17:09:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:22.679 17:09:38 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:22.937 17:09:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:22.937 17:09:38 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:22.937 17:09:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:23.195 17:09:39 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:23.195 17:09:39 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:23.195 17:09:39 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d8858fa8-d33f-4921-b1fa-e1550abd8372 lvol 150 00:17:23.453 17:09:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e 00:17:23.453 17:09:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:23.453 17:09:39 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:23.711 [2024-07-20 17:09:39.703935] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:23.711 [2024-07-20 17:09:39.704017] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:23.711 true 00:17:23.711 17:09:39 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:23.711 17:09:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:23.969 17:09:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:23.969 17:09:39 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:24.228 17:09:40 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e 00:17:24.486 17:09:40 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:24.744 [2024-07-20 17:09:40.650984] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.744 17:09:40 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:24.744 17:09:40 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=532374 00:17:25.002 17:09:40 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:25.002 17:09:40 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:25.002 17:09:40 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 532374 /var/tmp/bdevperf.sock 00:17:25.002 17:09:40 -- common/autotest_common.sh@819 -- # '[' -z 532374 ']' 00:17:25.002 17:09:40 
-- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.002 17:09:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:25.002 17:09:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:25.002 17:09:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:25.002 17:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:25.002 [2024-07-20 17:09:40.941818] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:25.002 [2024-07-20 17:09:40.941901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532374 ] 00:17:25.002 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.002 [2024-07-20 17:09:41.004522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.002 [2024-07-20 17:09:41.093771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.934 17:09:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.934 17:09:41 -- common/autotest_common.sh@852 -- # return 0 00:17:25.935 17:09:41 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:26.192 Nvme0n1 00:17:26.192 17:09:42 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:26.449 [ 00:17:26.449 { 00:17:26.449 "name": "Nvme0n1", 00:17:26.449 "aliases": [ 00:17:26.449 "d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e" 00:17:26.449 ], 00:17:26.449 "product_name": "NVMe disk", 00:17:26.449 "block_size": 4096, 00:17:26.449 "num_blocks": 38912, 00:17:26.449 "uuid": "d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e", 00:17:26.449 "assigned_rate_limits": { 00:17:26.449 "rw_ios_per_sec": 0, 00:17:26.449 "rw_mbytes_per_sec": 0, 00:17:26.449 "r_mbytes_per_sec": 0, 00:17:26.449 "w_mbytes_per_sec": 0 00:17:26.449 }, 00:17:26.449 "claimed": false, 00:17:26.449 "zoned": false, 00:17:26.449 "supported_io_types": { 00:17:26.449 "read": true, 00:17:26.449 "write": true, 00:17:26.449 "unmap": true, 00:17:26.449 "write_zeroes": true, 00:17:26.449 "flush": true, 00:17:26.449 "reset": true, 00:17:26.449 "compare": true, 00:17:26.449 "compare_and_write": true, 00:17:26.449 "abort": true, 00:17:26.449 "nvme_admin": true, 00:17:26.449 "nvme_io": true 00:17:26.449 }, 00:17:26.449 "driver_specific": { 00:17:26.449 "nvme": [ 00:17:26.449 { 00:17:26.449 "trid": { 00:17:26.449 "trtype": "TCP", 00:17:26.449 "adrfam": "IPv4", 00:17:26.449 "traddr": "10.0.0.2", 00:17:26.449 "trsvcid": "4420", 00:17:26.449 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:26.449 }, 00:17:26.449 "ctrlr_data": { 00:17:26.449 "cntlid": 1, 00:17:26.449 "vendor_id": "0x8086", 00:17:26.449 "model_number": "SPDK bdev Controller", 00:17:26.449 "serial_number": "SPDK0", 00:17:26.449 "firmware_revision": "24.01.1", 00:17:26.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:26.449 "oacs": { 00:17:26.449 "security": 0, 00:17:26.449 "format": 0, 00:17:26.449 "firmware": 0, 00:17:26.449 "ns_manage": 0 00:17:26.449 }, 00:17:26.449 "multi_ctrlr": true, 
00:17:26.449 "ana_reporting": false 00:17:26.449 }, 00:17:26.449 "vs": { 00:17:26.449 "nvme_version": "1.3" 00:17:26.449 }, 00:17:26.449 "ns_data": { 00:17:26.449 "id": 1, 00:17:26.449 "can_share": true 00:17:26.449 } 00:17:26.449 } 00:17:26.449 ], 00:17:26.449 "mp_policy": "active_passive" 00:17:26.449 } 00:17:26.449 } 00:17:26.449 ] 00:17:26.449 17:09:42 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=532652 00:17:26.449 17:09:42 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:26.449 17:09:42 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:26.707 Running I/O for 10 seconds... 00:17:27.637 Latency(us) 00:17:27.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.637 Nvme0n1 : 1.00 14684.00 57.36 0.00 0.00 0.00 0.00 0.00 00:17:27.637 =================================================================================================================== 00:17:27.637 Total : 14684.00 57.36 0.00 0.00 0.00 0.00 0.00 00:17:27.637 00:17:28.583 17:09:44 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:28.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.583 Nvme0n1 : 2.00 14734.00 57.55 0.00 0.00 0.00 0.00 0.00 00:17:28.583 =================================================================================================================== 00:17:28.583 Total : 14734.00 57.55 0.00 0.00 0.00 0.00 0.00 00:17:28.583 00:17:28.839 true 00:17:28.840 17:09:44 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:28.840 17:09:44 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:29.097 17:09:45 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:29.097 17:09:45 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:29.097 17:09:45 -- target/nvmf_lvs_grow.sh@65 -- # wait 532652 00:17:29.662 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.662 Nvme0n1 : 3.00 14777.67 57.73 0.00 0.00 0.00 0.00 0.00 00:17:29.662 =================================================================================================================== 00:17:29.662 Total : 14777.67 57.73 0.00 0.00 0.00 0.00 0.00 00:17:29.662 00:17:30.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.594 Nvme0n1 : 4.00 14839.00 57.96 0.00 0.00 0.00 0.00 0.00 00:17:30.594 =================================================================================================================== 00:17:30.594 Total : 14839.00 57.96 0.00 0.00 0.00 0.00 0.00 00:17:30.594 00:17:31.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.970 Nvme0n1 : 5.00 14892.00 58.17 0.00 0.00 0.00 0.00 0.00 00:17:31.970 =================================================================================================================== 00:17:31.970 Total : 14892.00 58.17 0.00 0.00 0.00 0.00 0.00 00:17:31.970 00:17:32.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.904 Nvme0n1 : 6.00 14935.83 58.34 0.00 0.00 0.00 0.00 0.00 00:17:32.905 
=================================================================================================================== 00:17:32.905 Total : 14935.83 58.34 0.00 0.00 0.00 0.00 0.00 00:17:32.905 00:17:33.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.840 Nvme0n1 : 7.00 14964.14 58.45 0.00 0.00 0.00 0.00 0.00 00:17:33.840 =================================================================================================================== 00:17:33.840 Total : 14964.14 58.45 0.00 0.00 0.00 0.00 0.00 00:17:33.840 00:17:34.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.775 Nvme0n1 : 8.00 14987.50 58.54 0.00 0.00 0.00 0.00 0.00 00:17:34.775 =================================================================================================================== 00:17:34.775 Total : 14987.50 58.54 0.00 0.00 0.00 0.00 0.00 00:17:34.775 00:17:35.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.716 Nvme0n1 : 9.00 15014.67 58.65 0.00 0.00 0.00 0.00 0.00 00:17:35.716 =================================================================================================================== 00:17:35.716 Total : 15014.67 58.65 0.00 0.00 0.00 0.00 0.00 00:17:35.716 00:17:36.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.654 Nvme0n1 : 10.00 15036.40 58.74 0.00 0.00 0.00 0.00 0.00 00:17:36.654 =================================================================================================================== 00:17:36.654 Total : 15036.40 58.74 0.00 0.00 0.00 0.00 0.00 00:17:36.654 00:17:36.654 00:17:36.654 Latency(us) 00:17:36.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.654 Nvme0n1 : 10.01 15039.36 58.75 0.00 0.00 8505.37 2281.62 12913.02 00:17:36.654 =================================================================================================================== 00:17:36.654 Total : 15039.36 58.75 0.00 0.00 8505.37 2281.62 12913.02 00:17:36.654 0 00:17:36.654 17:09:52 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 532374 00:17:36.654 17:09:52 -- common/autotest_common.sh@926 -- # '[' -z 532374 ']' 00:17:36.654 17:09:52 -- common/autotest_common.sh@930 -- # kill -0 532374 00:17:36.654 17:09:52 -- common/autotest_common.sh@931 -- # uname 00:17:36.654 17:09:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.654 17:09:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 532374 00:17:36.654 17:09:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:36.654 17:09:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:36.654 17:09:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 532374' 00:17:36.654 killing process with pid 532374 00:17:36.654 17:09:52 -- common/autotest_common.sh@945 -- # kill 532374 00:17:36.654 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.654 00:17:36.654 Latency(us) 00:17:36.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.654 =================================================================================================================== 00:17:36.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.654 17:09:52 -- common/autotest_common.sh@950 -- # wait 532374 00:17:36.910 17:09:52 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:37.167 17:09:53 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:37.167 17:09:53 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:37.436 17:09:53 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:37.436 17:09:53 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:37.436 17:09:53 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:37.694 [2024-07-20 17:09:53.734540] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:37.694 17:09:53 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:37.694 17:09:53 -- common/autotest_common.sh@640 -- # local es=0 00:17:37.694 17:09:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:37.694 17:09:53 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.694 17:09:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:37.694 17:09:53 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.694 17:09:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:37.694 17:09:53 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.694 17:09:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:37.694 17:09:53 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.694 17:09:53 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:37.694 17:09:53 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:37.951 request: 00:17:37.951 { 00:17:37.951 "uuid": "d8858fa8-d33f-4921-b1fa-e1550abd8372", 00:17:37.951 "method": "bdev_lvol_get_lvstores", 00:17:37.951 "req_id": 1 00:17:37.951 } 00:17:37.951 Got JSON-RPC error response 00:17:37.951 response: 00:17:37.951 { 00:17:37.951 "code": -19, 00:17:37.951 "message": "No such device" 00:17:37.951 } 00:17:37.951 17:09:54 -- common/autotest_common.sh@643 -- # es=1 00:17:37.951 17:09:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:37.951 17:09:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:37.951 17:09:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:37.951 17:09:54 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:38.209 aio_bdev 00:17:38.209 17:09:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e 00:17:38.209 17:09:54 -- common/autotest_common.sh@887 -- # local bdev_name=d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e 00:17:38.209 17:09:54 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:38.209 17:09:54 -- common/autotest_common.sh@889 -- # local i 00:17:38.209 17:09:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:38.209 17:09:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:38.209 17:09:54 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:38.466 17:09:54 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e -t 2000 00:17:38.728 [ 00:17:38.728 { 00:17:38.728 "name": "d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e", 00:17:38.728 "aliases": [ 00:17:38.728 "lvs/lvol" 00:17:38.728 ], 00:17:38.728 "product_name": "Logical Volume", 00:17:38.728 "block_size": 4096, 00:17:38.728 "num_blocks": 38912, 00:17:38.728 "uuid": "d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e", 00:17:38.728 "assigned_rate_limits": { 00:17:38.728 "rw_ios_per_sec": 0, 00:17:38.728 "rw_mbytes_per_sec": 0, 00:17:38.728 "r_mbytes_per_sec": 0, 00:17:38.728 "w_mbytes_per_sec": 0 00:17:38.728 }, 00:17:38.728 "claimed": false, 00:17:38.728 "zoned": false, 00:17:38.728 "supported_io_types": { 00:17:38.728 "read": true, 00:17:38.728 "write": true, 00:17:38.728 "unmap": true, 00:17:38.728 "write_zeroes": true, 00:17:38.728 "flush": false, 00:17:38.728 "reset": true, 00:17:38.728 "compare": false, 00:17:38.728 "compare_and_write": false, 00:17:38.728 "abort": false, 00:17:38.728 "nvme_admin": false, 00:17:38.728 "nvme_io": false 00:17:38.728 }, 00:17:38.728 "driver_specific": { 00:17:38.728 "lvol": { 00:17:38.728 "lvol_store_uuid": "d8858fa8-d33f-4921-b1fa-e1550abd8372", 00:17:38.728 "base_bdev": "aio_bdev", 00:17:38.728 "thin_provision": false, 00:17:38.728 "snapshot": false, 00:17:38.728 "clone": false, 00:17:38.728 "esnap_clone": false 00:17:38.728 } 00:17:38.728 } 00:17:38.728 } 00:17:38.728 ] 00:17:38.728 17:09:54 -- common/autotest_common.sh@895 -- # return 0 00:17:38.728 17:09:54 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:38.728 17:09:54 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:38.990 17:09:55 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:38.990 17:09:55 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:38.990 17:09:55 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:39.248 17:09:55 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:39.248 17:09:55 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d1ef4034-5dc0-4aba-8cd0-ecd666cf4c5e 00:17:39.505 17:09:55 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d8858fa8-d33f-4921-b1fa-e1550abd8372 00:17:39.763 17:09:55 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.020 00:17:40.020 real 0m17.608s 00:17:40.020 user 0m17.201s 00:17:40.020 sys 0m1.900s 00:17:40.020 17:09:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 
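The clean variant above reduces to a short RPC sequence: build a logical-volume store on an AIO bdev backed by a plain file, enlarge the file, rescan, grow the store, and confirm the cluster count doubled. A minimal sketch of that flow, assuming a running nvmf_tgt with SPDK's scripts/rpc.py on PATH and using a placeholder backing-file path:

# Sketch of the grow flow exercised by lvs_grow_clean (paths are placeholders).
truncate -s 200M /tmp/aio_file                      # 200 MiB backing file
rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096  # AIO bdev, 4 KiB blocks
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
rpc.py bdev_lvol_create -u "$lvs" lvol 150          # 150 MiB lvol in the store
truncate -s 400M /tmp/aio_file                      # grow the backing file...
rpc.py bdev_aio_rescan aio_bdev                     # ...and let the bdev notice
rpc.py bdev_lvol_grow_lvstore -u "$lvs"             # claim the new clusters
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

With 4 MiB clusters the store reports 49 data clusters at 200 MiB and 99 after the grow, which is what the (( data_clusters == 99 )) checks in the trace assert.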
00:17:40.020 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:17:40.020 ************************************ 00:17:40.020 END TEST lvs_grow_clean 00:17:40.020 ************************************ 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:40.020 17:09:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:40.020 17:09:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.020 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:17:40.020 ************************************ 00:17:40.020 START TEST lvs_grow_dirty 00:17:40.020 ************************************ 00:17:40.020 17:09:56 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.020 17:09:56 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:40.277 17:09:56 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:40.277 17:09:56 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:40.534 17:09:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:40.534 17:09:56 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:40.534 17:09:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:40.792 17:09:56 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:40.792 17:09:56 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:40.792 17:09:56 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f718b4c2-0652-495d-9cf7-1cf93f52da85 lvol 150 00:17:41.050 17:09:57 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c5c5e2b9-17cc-4ae1-a842-42290e83c5c6 00:17:41.050 17:09:57 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:41.050 17:09:57 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:41.308 [2024-07-20 17:09:57.330020] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:41.308 [2024-07-20 17:09:57.330121] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:41.308 
true 00:17:41.308 17:09:57 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:41.308 17:09:57 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:41.567 17:09:57 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:41.567 17:09:57 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:41.825 17:09:57 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c5c5e2b9-17cc-4ae1-a842-42290e83c5c6 00:17:42.083 17:09:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:42.341 17:09:58 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:42.598 17:09:58 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=534607 00:17:42.598 17:09:58 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:42.598 17:09:58 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.598 17:09:58 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 534607 /var/tmp/bdevperf.sock 00:17:42.598 17:09:58 -- common/autotest_common.sh@819 -- # '[' -z 534607 ']' 00:17:42.598 17:09:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.599 17:09:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.599 17:09:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.599 17:09:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.599 17:09:58 -- common/autotest_common.sh@10 -- # set +x 00:17:42.599 [2024-07-20 17:09:58.611429] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
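As in the clean pass, the lvol is then exported over NVMe/TCP and driven by bdevperf, which is launched with -z so it idles until instructed over its own RPC socket. In outline, with the NQN and addresses taken from the trace and "$lvol" standing in for the lvol UUID:

# Target side: expose the lvol through an NVMe-oF TCP subsystem.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: attach a controller through bdevperf's private socket,
# then kick off the 10-second randwrite workload.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests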
00:17:42.599 [2024-07-20 17:09:58.611519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid534607 ] 00:17:42.599 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.599 [2024-07-20 17:09:58.674388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.855 [2024-07-20 17:09:58.763761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.785 17:09:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.785 17:09:59 -- common/autotest_common.sh@852 -- # return 0 00:17:43.785 17:09:59 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:44.042 Nvme0n1 00:17:44.042 17:10:00 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:44.298 [ 00:17:44.298 { 00:17:44.298 "name": "Nvme0n1", 00:17:44.298 "aliases": [ 00:17:44.298 "c5c5e2b9-17cc-4ae1-a842-42290e83c5c6" 00:17:44.298 ], 00:17:44.298 "product_name": "NVMe disk", 00:17:44.298 "block_size": 4096, 00:17:44.298 "num_blocks": 38912, 00:17:44.298 "uuid": "c5c5e2b9-17cc-4ae1-a842-42290e83c5c6", 00:17:44.298 "assigned_rate_limits": { 00:17:44.298 "rw_ios_per_sec": 0, 00:17:44.298 "rw_mbytes_per_sec": 0, 00:17:44.298 "r_mbytes_per_sec": 0, 00:17:44.298 "w_mbytes_per_sec": 0 00:17:44.298 }, 00:17:44.298 "claimed": false, 00:17:44.298 "zoned": false, 00:17:44.298 "supported_io_types": { 00:17:44.298 "read": true, 00:17:44.298 "write": true, 00:17:44.298 "unmap": true, 00:17:44.298 "write_zeroes": true, 00:17:44.298 "flush": true, 00:17:44.298 "reset": true, 00:17:44.298 "compare": true, 00:17:44.298 "compare_and_write": true, 00:17:44.298 "abort": true, 00:17:44.298 "nvme_admin": true, 00:17:44.298 "nvme_io": true 00:17:44.298 }, 00:17:44.298 "driver_specific": { 00:17:44.298 "nvme": [ 00:17:44.298 { 00:17:44.298 "trid": { 00:17:44.298 "trtype": "TCP", 00:17:44.298 "adrfam": "IPv4", 00:17:44.298 "traddr": "10.0.0.2", 00:17:44.298 "trsvcid": "4420", 00:17:44.298 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:44.298 }, 00:17:44.298 "ctrlr_data": { 00:17:44.298 "cntlid": 1, 00:17:44.298 "vendor_id": "0x8086", 00:17:44.298 "model_number": "SPDK bdev Controller", 00:17:44.298 "serial_number": "SPDK0", 00:17:44.298 "firmware_revision": "24.01.1", 00:17:44.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:44.298 "oacs": { 00:17:44.298 "security": 0, 00:17:44.298 "format": 0, 00:17:44.298 "firmware": 0, 00:17:44.298 "ns_manage": 0 00:17:44.298 }, 00:17:44.298 "multi_ctrlr": true, 00:17:44.298 "ana_reporting": false 00:17:44.298 }, 00:17:44.298 "vs": { 00:17:44.298 "nvme_version": "1.3" 00:17:44.298 }, 00:17:44.298 "ns_data": { 00:17:44.298 "id": 1, 00:17:44.298 "can_share": true 00:17:44.298 } 00:17:44.298 } 00:17:44.298 ], 00:17:44.298 "mp_policy": "active_passive" 00:17:44.298 } 00:17:44.298 } 00:17:44.298 ] 00:17:44.298 17:10:00 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=534757 00:17:44.298 17:10:00 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:44.298 17:10:00 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:44.298 Running I/O 
for 10 seconds... 00:17:45.229 Latency(us) 00:17:45.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.229 Nvme0n1 : 1.00 14492.00 56.61 0.00 0.00 0.00 0.00 0.00 00:17:45.229 =================================================================================================================== 00:17:45.229 Total : 14492.00 56.61 0.00 0.00 0.00 0.00 0.00 00:17:45.229 00:17:46.159 17:10:02 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:46.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.416 Nvme0n1 : 2.00 14648.00 57.22 0.00 0.00 0.00 0.00 0.00 00:17:46.416 =================================================================================================================== 00:17:46.416 Total : 14648.00 57.22 0.00 0.00 0.00 0.00 0.00 00:17:46.416 00:17:46.416 true 00:17:46.416 17:10:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:46.416 17:10:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:46.673 17:10:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:46.673 17:10:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:46.673 17:10:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 534757 00:17:47.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.237 Nvme0n1 : 3.00 14729.33 57.54 0.00 0.00 0.00 0.00 0.00 00:17:47.237 =================================================================================================================== 00:17:47.237 Total : 14729.33 57.54 0.00 0.00 0.00 0.00 0.00 00:17:47.237 00:17:48.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.617 Nvme0n1 : 4.00 14823.00 57.90 0.00 0.00 0.00 0.00 0.00 00:17:48.617 =================================================================================================================== 00:17:48.617 Total : 14823.00 57.90 0.00 0.00 0.00 0.00 0.00 00:17:48.617 00:17:49.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.249 Nvme0n1 : 5.00 14879.20 58.12 0.00 0.00 0.00 0.00 0.00 00:17:49.249 =================================================================================================================== 00:17:49.249 Total : 14879.20 58.12 0.00 0.00 0.00 0.00 0.00 00:17:49.249 00:17:50.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.621 Nvme0n1 : 6.00 14916.67 58.27 0.00 0.00 0.00 0.00 0.00 00:17:50.621 =================================================================================================================== 00:17:50.621 Total : 14916.67 58.27 0.00 0.00 0.00 0.00 0.00 00:17:50.621 00:17:51.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.571 Nvme0n1 : 7.00 14968.86 58.47 0.00 0.00 0.00 0.00 0.00 00:17:51.571 =================================================================================================================== 00:17:51.571 Total : 14968.86 58.47 0.00 0.00 0.00 0.00 0.00 00:17:51.571 00:17:52.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.505 Nvme0n1 : 8.00 15008.12 58.63 0.00 0.00 0.00 0.00 0.00 00:17:52.505 
=================================================================================================================== 00:17:52.505 Total : 15008.12 58.63 0.00 0.00 0.00 0.00 0.00 00:17:52.505 00:17:53.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.440 Nvme0n1 : 9.00 15035.89 58.73 0.00 0.00 0.00 0.00 0.00 00:17:53.440 =================================================================================================================== 00:17:53.440 Total : 15035.89 58.73 0.00 0.00 0.00 0.00 0.00 00:17:53.440 00:17:54.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.373 Nvme0n1 : 10.00 15064.00 58.84 0.00 0.00 0.00 0.00 0.00 00:17:54.373 =================================================================================================================== 00:17:54.373 Total : 15064.00 58.84 0.00 0.00 0.00 0.00 0.00 00:17:54.373 00:17:54.373 00:17:54.373 Latency(us) 00:17:54.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.373 Nvme0n1 : 10.01 15064.42 58.85 0.00 0.00 8491.18 2936.98 14757.74 00:17:54.373 =================================================================================================================== 00:17:54.373 Total : 15064.42 58.85 0.00 0.00 8491.18 2936.98 14757.74 00:17:54.373 0 00:17:54.373 17:10:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 534607 00:17:54.373 17:10:10 -- common/autotest_common.sh@926 -- # '[' -z 534607 ']' 00:17:54.374 17:10:10 -- common/autotest_common.sh@930 -- # kill -0 534607 00:17:54.374 17:10:10 -- common/autotest_common.sh@931 -- # uname 00:17:54.374 17:10:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:54.374 17:10:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 534607 00:17:54.374 17:10:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:54.374 17:10:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:54.374 17:10:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 534607' 00:17:54.374 killing process with pid 534607 00:17:54.374 17:10:10 -- common/autotest_common.sh@945 -- # kill 534607 00:17:54.374 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.374 00:17:54.374 Latency(us) 00:17:54.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.374 =================================================================================================================== 00:17:54.374 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.374 17:10:10 -- common/autotest_common.sh@950 -- # wait 534607 00:17:54.631 17:10:10 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:54.889 17:10:10 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:54.889 17:10:10 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:55.147 17:10:11 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:55.147 17:10:11 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:55.147 17:10:11 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 531922 00:17:55.147 17:10:11 -- target/nvmf_lvs_grow.sh@74 -- # wait 531922 00:17:55.147 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 531922 Killed "${NVMF_APP[@]}" "$@" 00:17:55.147 17:10:11 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:55.147 17:10:11 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:55.147 17:10:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:55.147 17:10:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:55.147 17:10:11 -- common/autotest_common.sh@10 -- # set +x 00:17:55.147 17:10:11 -- nvmf/common.sh@469 -- # nvmfpid=536121 00:17:55.147 17:10:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:55.147 17:10:11 -- nvmf/common.sh@470 -- # waitforlisten 536121 00:17:55.147 17:10:11 -- common/autotest_common.sh@819 -- # '[' -z 536121 ']' 00:17:55.147 17:10:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.147 17:10:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:55.147 17:10:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.147 17:10:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:55.147 17:10:11 -- common/autotest_common.sh@10 -- # set +x 00:17:55.147 [2024-07-20 17:10:11.272429] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:55.147 [2024-07-20 17:10:11.272517] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.405 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.405 [2024-07-20 17:10:11.349588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.405 [2024-07-20 17:10:11.439270] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:55.405 [2024-07-20 17:10:11.439418] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.405 [2024-07-20 17:10:11.439436] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.405 [2024-07-20 17:10:11.439449] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
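This is the crux of the dirty variant: the first target (pid 531922) is killed with SIGKILL while the grown lvstore's metadata is only partially persisted, and nvmfappstart brings up a replacement process. When the AIO bdev is re-created below, blobstore detects the unclean shutdown and replays its metadata (the bs_recover notices that follow), after which the cluster accounting must still match the pre-kill state. Roughly, with placeholder paths and pid:

# Outline of the dirty-shutdown path (paths and pid are placeholders).
kill -9 "$nvmfpid"                                  # hard-kill the target mid-flight
nvmf_tgt -m 0x1 &                                   # start a replacement target
rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096  # re-open -> blobstore recovery
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99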
00:17:55.405 [2024-07-20 17:10:11.439477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.382 17:10:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:56.382 17:10:12 -- common/autotest_common.sh@852 -- # return 0 00:17:56.382 17:10:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:56.382 17:10:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:56.382 17:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:56.382 17:10:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.382 17:10:12 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:56.382 [2024-07-20 17:10:12.521582] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:56.382 [2024-07-20 17:10:12.521725] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:56.382 [2024-07-20 17:10:12.521770] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:56.640 17:10:12 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:56.640 17:10:12 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c5c5e2b9-17cc-4ae1-a842-42290e83c5c6 00:17:56.640 17:10:12 -- common/autotest_common.sh@887 -- # local bdev_name=c5c5e2b9-17cc-4ae1-a842-42290e83c5c6 00:17:56.640 17:10:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:56.640 17:10:12 -- common/autotest_common.sh@889 -- # local i 00:17:56.640 17:10:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:56.640 17:10:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:56.640 17:10:12 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:56.899 17:10:12 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c5c5e2b9-17cc-4ae1-a842-42290e83c5c6 -t 2000 00:17:57.157 [ 00:17:57.157 { 00:17:57.157 "name": "c5c5e2b9-17cc-4ae1-a842-42290e83c5c6", 00:17:57.157 "aliases": [ 00:17:57.157 "lvs/lvol" 00:17:57.157 ], 00:17:57.157 "product_name": "Logical Volume", 00:17:57.157 "block_size": 4096, 00:17:57.157 "num_blocks": 38912, 00:17:57.157 "uuid": "c5c5e2b9-17cc-4ae1-a842-42290e83c5c6", 00:17:57.157 "assigned_rate_limits": { 00:17:57.157 "rw_ios_per_sec": 0, 00:17:57.157 "rw_mbytes_per_sec": 0, 00:17:57.157 "r_mbytes_per_sec": 0, 00:17:57.157 "w_mbytes_per_sec": 0 00:17:57.157 }, 00:17:57.157 "claimed": false, 00:17:57.157 "zoned": false, 00:17:57.157 "supported_io_types": { 00:17:57.157 "read": true, 00:17:57.157 "write": true, 00:17:57.157 "unmap": true, 00:17:57.157 "write_zeroes": true, 00:17:57.157 "flush": false, 00:17:57.157 "reset": true, 00:17:57.157 "compare": false, 00:17:57.157 "compare_and_write": false, 00:17:57.157 "abort": false, 00:17:57.157 "nvme_admin": false, 00:17:57.157 "nvme_io": false 00:17:57.157 }, 00:17:57.157 "driver_specific": { 00:17:57.157 "lvol": { 00:17:57.157 "lvol_store_uuid": "f718b4c2-0652-495d-9cf7-1cf93f52da85", 00:17:57.157 "base_bdev": "aio_bdev", 00:17:57.157 "thin_provision": false, 00:17:57.157 "snapshot": false, 00:17:57.157 "clone": false, 00:17:57.157 "esnap_clone": false 00:17:57.157 } 00:17:57.157 } 00:17:57.157 } 00:17:57.157 ] 00:17:57.157 17:10:13 -- common/autotest_common.sh@895 -- # return 0 00:17:57.157 17:10:13 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:57.157 17:10:13 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:57.414 17:10:13 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:57.414 17:10:13 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:57.414 17:10:13 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:57.672 17:10:13 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:57.672 17:10:13 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:57.930 [2024-07-20 17:10:13.866966] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:57.930 17:10:13 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:57.930 17:10:13 -- common/autotest_common.sh@640 -- # local es=0 00:17:57.930 17:10:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:57.930 17:10:13 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.930 17:10:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:57.930 17:10:13 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.930 17:10:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:57.930 17:10:13 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.930 17:10:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:57.930 17:10:13 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.930 17:10:13 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:57.930 17:10:13 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:58.188 request: 00:17:58.188 { 00:17:58.188 "uuid": "f718b4c2-0652-495d-9cf7-1cf93f52da85", 00:17:58.188 "method": "bdev_lvol_get_lvstores", 00:17:58.188 "req_id": 1 00:17:58.188 } 00:17:58.188 Got JSON-RPC error response 00:17:58.188 response: 00:17:58.188 { 00:17:58.188 "code": -19, 00:17:58.188 "message": "No such device" 00:17:58.188 } 00:17:58.188 17:10:14 -- common/autotest_common.sh@643 -- # es=1 00:17:58.188 17:10:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:58.188 17:10:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:58.188 17:10:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:58.188 17:10:14 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:58.446 aio_bdev 00:17:58.446 17:10:14 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c5c5e2b9-17cc-4ae1-a842-42290e83c5c6 00:17:58.446 17:10:14 -- 
common/autotest_common.sh@887 -- # local bdev_name=c5c5e2b9-17cc-4ae1-a842-42290e83c5c6 00:17:58.446 17:10:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:58.446 17:10:14 -- common/autotest_common.sh@889 -- # local i 00:17:58.446 17:10:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:58.446 17:10:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:58.446 17:10:14 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:58.704 17:10:14 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c5c5e2b9-17cc-4ae1-a842-42290e83c5c6 -t 2000 00:17:58.704 [ 00:17:58.704 { 00:17:58.704 "name": "c5c5e2b9-17cc-4ae1-a842-42290e83c5c6", 00:17:58.704 "aliases": [ 00:17:58.704 "lvs/lvol" 00:17:58.704 ], 00:17:58.704 "product_name": "Logical Volume", 00:17:58.704 "block_size": 4096, 00:17:58.704 "num_blocks": 38912, 00:17:58.704 "uuid": "c5c5e2b9-17cc-4ae1-a842-42290e83c5c6", 00:17:58.704 "assigned_rate_limits": { 00:17:58.704 "rw_ios_per_sec": 0, 00:17:58.704 "rw_mbytes_per_sec": 0, 00:17:58.704 "r_mbytes_per_sec": 0, 00:17:58.704 "w_mbytes_per_sec": 0 00:17:58.704 }, 00:17:58.704 "claimed": false, 00:17:58.704 "zoned": false, 00:17:58.704 "supported_io_types": { 00:17:58.704 "read": true, 00:17:58.704 "write": true, 00:17:58.704 "unmap": true, 00:17:58.704 "write_zeroes": true, 00:17:58.704 "flush": false, 00:17:58.704 "reset": true, 00:17:58.704 "compare": false, 00:17:58.704 "compare_and_write": false, 00:17:58.704 "abort": false, 00:17:58.705 "nvme_admin": false, 00:17:58.705 "nvme_io": false 00:17:58.705 }, 00:17:58.705 "driver_specific": { 00:17:58.705 "lvol": { 00:17:58.705 "lvol_store_uuid": "f718b4c2-0652-495d-9cf7-1cf93f52da85", 00:17:58.705 "base_bdev": "aio_bdev", 00:17:58.705 "thin_provision": false, 00:17:58.705 "snapshot": false, 00:17:58.705 "clone": false, 00:17:58.705 "esnap_clone": false 00:17:58.705 } 00:17:58.705 } 00:17:58.705 } 00:17:58.705 ] 00:17:58.705 17:10:14 -- common/autotest_common.sh@895 -- # return 0 00:17:58.705 17:10:14 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:58.705 17:10:14 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:58.962 17:10:15 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:58.962 17:10:15 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:17:58.962 17:10:15 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:59.220 17:10:15 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:59.220 17:10:15 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c5c5e2b9-17cc-4ae1-a842-42290e83c5c6 00:17:59.478 17:10:15 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f718b4c2-0652-495d-9cf7-1cf93f52da85 00:18:00.043 17:10:15 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:00.043 17:10:16 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:00.043 00:18:00.043 real 0m20.088s 00:18:00.043 user 
0m49.971s 00:18:00.043 sys 0m4.837s 00:18:00.043 17:10:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.043 17:10:16 -- common/autotest_common.sh@10 -- # set +x 00:18:00.043 ************************************ 00:18:00.043 END TEST lvs_grow_dirty 00:18:00.044 ************************************ 00:18:00.044 17:10:16 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:00.044 17:10:16 -- common/autotest_common.sh@796 -- # type=--id 00:18:00.044 17:10:16 -- common/autotest_common.sh@797 -- # id=0 00:18:00.044 17:10:16 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:00.044 17:10:16 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:00.044 17:10:16 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:00.044 17:10:16 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:00.044 17:10:16 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:00.044 17:10:16 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:00.044 nvmf_trace.0 00:18:00.301 17:10:16 -- common/autotest_common.sh@811 -- # return 0 00:18:00.301 17:10:16 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:00.301 17:10:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:00.301 17:10:16 -- nvmf/common.sh@116 -- # sync 00:18:00.301 17:10:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:00.301 17:10:16 -- nvmf/common.sh@119 -- # set +e 00:18:00.301 17:10:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:00.301 17:10:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:00.301 rmmod nvme_tcp 00:18:00.301 rmmod nvme_fabrics 00:18:00.301 rmmod nvme_keyring 00:18:00.301 17:10:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:00.301 17:10:16 -- nvmf/common.sh@123 -- # set -e 00:18:00.301 17:10:16 -- nvmf/common.sh@124 -- # return 0 00:18:00.301 17:10:16 -- nvmf/common.sh@477 -- # '[' -n 536121 ']' 00:18:00.301 17:10:16 -- nvmf/common.sh@478 -- # killprocess 536121 00:18:00.301 17:10:16 -- common/autotest_common.sh@926 -- # '[' -z 536121 ']' 00:18:00.301 17:10:16 -- common/autotest_common.sh@930 -- # kill -0 536121 00:18:00.301 17:10:16 -- common/autotest_common.sh@931 -- # uname 00:18:00.301 17:10:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:00.301 17:10:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 536121 00:18:00.301 17:10:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:00.301 17:10:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:00.301 17:10:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 536121' 00:18:00.301 killing process with pid 536121 00:18:00.301 17:10:16 -- common/autotest_common.sh@945 -- # kill 536121 00:18:00.301 17:10:16 -- common/autotest_common.sh@950 -- # wait 536121 00:18:00.560 17:10:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:00.560 17:10:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:00.560 17:10:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:00.560 17:10:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:00.560 17:10:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:00.560 17:10:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.560 17:10:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.560 17:10:16 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:02.459 17:10:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:02.459 00:18:02.459 real 0m43.596s 00:18:02.459 user 1m13.677s 00:18:02.459 sys 0m8.614s 00:18:02.459 17:10:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.459 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:18:02.459 ************************************ 00:18:02.459 END TEST nvmf_lvs_grow 00:18:02.459 ************************************ 00:18:02.459 17:10:18 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:02.459 17:10:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:02.459 17:10:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:02.459 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:18:02.459 ************************************ 00:18:02.459 START TEST nvmf_bdev_io_wait 00:18:02.459 ************************************ 00:18:02.459 17:10:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:02.459 * Looking for test storage... 00:18:02.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.716 17:10:18 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.716 17:10:18 -- nvmf/common.sh@7 -- # uname -s 00:18:02.716 17:10:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.716 17:10:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.716 17:10:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.716 17:10:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.716 17:10:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.716 17:10:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.716 17:10:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.716 17:10:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.716 17:10:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.716 17:10:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.716 17:10:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.716 17:10:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.716 17:10:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.716 17:10:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.716 17:10:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.716 17:10:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.716 17:10:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.716 17:10:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.716 17:10:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.716 17:10:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.716 17:10:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.716 17:10:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.716 17:10:18 -- paths/export.sh@5 -- # export PATH 00:18:02.716 17:10:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.716 17:10:18 -- nvmf/common.sh@46 -- # : 0 00:18:02.716 17:10:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:02.716 17:10:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:02.716 17:10:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:02.716 17:10:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.716 17:10:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.716 17:10:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:02.716 17:10:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:02.716 17:10:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:02.716 17:10:18 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.716 17:10:18 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.716 17:10:18 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:02.716 17:10:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:02.716 17:10:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.716 17:10:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:02.716 17:10:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:02.716 17:10:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:02.716 17:10:18 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.716 17:10:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.716 17:10:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.716 17:10:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:02.716 17:10:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:02.717 17:10:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:02.717 17:10:18 -- common/autotest_common.sh@10 -- # set +x 00:18:04.639 17:10:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:04.639 17:10:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:04.639 17:10:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:04.639 17:10:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:04.639 17:10:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:04.639 17:10:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:04.639 17:10:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:04.639 17:10:20 -- nvmf/common.sh@294 -- # net_devs=() 00:18:04.639 17:10:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:04.639 17:10:20 -- nvmf/common.sh@295 -- # e810=() 00:18:04.639 17:10:20 -- nvmf/common.sh@295 -- # local -ga e810 00:18:04.639 17:10:20 -- nvmf/common.sh@296 -- # x722=() 00:18:04.639 17:10:20 -- nvmf/common.sh@296 -- # local -ga x722 00:18:04.639 17:10:20 -- nvmf/common.sh@297 -- # mlx=() 00:18:04.639 17:10:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:04.639 17:10:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.639 17:10:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:04.639 17:10:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:04.639 17:10:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:04.639 17:10:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:04.639 17:10:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:04.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:04.639 17:10:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
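The classification loop traced above is essentially a sysfs scan: each NIC is matched by PCI vendor/device ID and the kernel netdev behind it is resolved. A minimal stand-alone sketch of that logic, assuming the E810 IDs shown in the trace (the real gather_supported_nvmf_pci_devs consults a prebuilt pci_bus_cache rather than walking sysfs directly):

  intel=0x8086
  for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
    case "$vendor:$device" in
      "$intel:0x1592"|"$intel:0x159b")                  # Intel E810 ports
        echo "Found $(basename "$dev") ($vendor - $device)"
        drv=$(basename "$(readlink -f "$dev/driver")")  # 'ice' on this host
        [ "$drv" = unknown ] && continue                # mirrors the ice/unknown check
        for net in "$dev"/net/*; do                     # netdev name behind the port
          [ -e "$net" ] || continue
          echo "Found net devices under $(basename "$dev"): $(basename "$net")"
        done ;;
    esac
  done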
00:18:04.639 17:10:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:04.639 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:04.639 17:10:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:04.639 17:10:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.639 17:10:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.639 17:10:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.639 17:10:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.639 17:10:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:04.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:04.639 17:10:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.639 17:10:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.639 17:10:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.639 17:10:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.639 17:10:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.639 17:10:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:04.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:04.639 17:10:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.639 17:10:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:04.639 17:10:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:04.639 17:10:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:04.639 17:10:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:04.639 17:10:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.639 17:10:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.639 17:10:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:04.639 17:10:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:04.639 17:10:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:04.639 17:10:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:04.639 17:10:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:04.639 17:10:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:04.639 17:10:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.639 17:10:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:04.639 17:10:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:04.639 17:10:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:04.639 17:10:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:04.639 17:10:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:04.639 17:10:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:04.639 17:10:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:04.639 17:10:20 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:04.639 17:10:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:04.639 17:10:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:04.639 17:10:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:04.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:18:04.639 00:18:04.639 --- 10.0.0.2 ping statistics --- 00:18:04.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.639 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:18:04.639 17:10:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:04.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:04.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:18:04.639 00:18:04.639 --- 10.0.0.1 ping statistics --- 00:18:04.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.639 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:18:04.639 17:10:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.639 17:10:20 -- nvmf/common.sh@410 -- # return 0 00:18:04.639 17:10:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:04.640 17:10:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.640 17:10:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:04.640 17:10:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:04.640 17:10:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.640 17:10:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:04.640 17:10:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:04.640 17:10:20 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:04.640 17:10:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:04.640 17:10:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:04.640 17:10:20 -- common/autotest_common.sh@10 -- # set +x 00:18:04.640 17:10:20 -- nvmf/common.sh@469 -- # nvmfpid=538680 00:18:04.640 17:10:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:04.640 17:10:20 -- nvmf/common.sh@470 -- # waitforlisten 538680 00:18:04.640 17:10:20 -- common/autotest_common.sh@819 -- # '[' -z 538680 ']' 00:18:04.640 17:10:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.640 17:10:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:04.640 17:10:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.640 17:10:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:04.640 17:10:20 -- common/autotest_common.sh@10 -- # set +x 00:18:04.640 [2024-07-20 17:10:20.724071] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
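The nvmf_tcp_init sequence just traced builds the whole test topology out of the two physical E810 ports: one is moved into a private network namespace and carries the target address, the other stays in the root namespace as the initiator. Condensed from the commands above (a sketch, not the full helper):

  ip netns add cvl_0_0_ns_spdk                         # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP in
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator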
00:18:04.640 [2024-07-20 17:10:20.724171] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.640 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.640 [2024-07-20 17:10:20.789136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.897 [2024-07-20 17:10:20.874940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:04.897 [2024-07-20 17:10:20.875089] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.897 [2024-07-20 17:10:20.875107] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.897 [2024-07-20 17:10:20.875120] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.897 [2024-07-20 17:10:20.875184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.897 [2024-07-20 17:10:20.875696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.897 [2024-07-20 17:10:20.875785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.897 [2024-07-20 17:10:20.875788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.897 17:10:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:04.897 17:10:20 -- common/autotest_common.sh@852 -- # return 0 00:18:04.897 17:10:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:04.897 17:10:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:04.897 17:10:20 -- common/autotest_common.sh@10 -- # set +x 00:18:04.897 17:10:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.897 17:10:20 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:04.897 17:10:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.897 17:10:20 -- common/autotest_common.sh@10 -- # set +x 00:18:04.897 17:10:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.897 17:10:20 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:04.897 17:10:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.897 17:10:20 -- common/autotest_common.sh@10 -- # set +x 00:18:04.897 17:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.897 17:10:21 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:04.897 17:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.897 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:18:04.897 [2024-07-20 17:10:21.027584] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.897 17:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.897 17:10:21 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:04.897 17:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.897 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:18:05.154 Malloc0 00:18:05.154 17:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:05.154 17:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.154 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:18:05.154 17:10:21 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.154 17:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.154 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:18:05.154 17:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.154 17:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.154 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:18:05.154 [2024-07-20 17:10:21.089465] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.154 17:10:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=538828 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@30 -- # READ_PID=538829 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:05.154 17:10:21 -- nvmf/common.sh@520 -- # config=() 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=538832 00:18:05.154 17:10:21 -- nvmf/common.sh@520 -- # local subsystem config 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:05.154 17:10:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:05.154 17:10:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:05.154 { 00:18:05.154 "params": { 00:18:05.154 "name": "Nvme$subsystem", 00:18:05.154 "trtype": "$TEST_TRANSPORT", 00:18:05.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:05.154 "adrfam": "ipv4", 00:18:05.154 "trsvcid": "$NVMF_PORT", 00:18:05.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:05.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:05.154 "hdgst": ${hdgst:-false}, 00:18:05.154 "ddgst": ${ddgst:-false} 00:18:05.154 }, 00:18:05.154 "method": "bdev_nvme_attach_controller" 00:18:05.154 } 00:18:05.154 EOF 00:18:05.154 )") 00:18:05.154 17:10:21 -- nvmf/common.sh@520 -- # config=() 00:18:05.154 17:10:21 -- nvmf/common.sh@520 -- # local subsystem config 00:18:05.154 17:10:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:05.154 17:10:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:05.154 { 00:18:05.154 "params": { 00:18:05.154 "name": "Nvme$subsystem", 00:18:05.154 "trtype": "$TEST_TRANSPORT", 00:18:05.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:05.154 "adrfam": "ipv4", 00:18:05.154 "trsvcid": "$NVMF_PORT", 00:18:05.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:05.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:05.154 "hdgst": ${hdgst:-false}, 00:18:05.154 "ddgst": ${ddgst:-false} 00:18:05.154 }, 00:18:05.154 "method": "bdev_nvme_attach_controller" 00:18:05.154 } 00:18:05.154 EOF 00:18:05.154 )") 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=538834 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@35 -- # sync 00:18:05.154 17:10:21 -- nvmf/common.sh@520 -- # config=() 00:18:05.154 17:10:21 -- nvmf/common.sh@520 -- # local subsystem config 00:18:05.154 17:10:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:05.154 17:10:21 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:05.154 17:10:21 -- nvmf/common.sh@542 -- # cat 00:18:05.154 17:10:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:05.154 { 00:18:05.154 "params": { 00:18:05.154 "name": "Nvme$subsystem", 00:18:05.154 "trtype": "$TEST_TRANSPORT", 00:18:05.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:05.154 "adrfam": "ipv4", 00:18:05.154 "trsvcid": "$NVMF_PORT", 00:18:05.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:05.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:05.154 "hdgst": ${hdgst:-false}, 00:18:05.154 "ddgst": ${ddgst:-false} 00:18:05.154 }, 00:18:05.154 "method": "bdev_nvme_attach_controller" 00:18:05.154 } 00:18:05.154 EOF 00:18:05.154 )") 00:18:05.154 17:10:21 -- nvmf/common.sh@520 -- # config=() 00:18:05.154 17:10:21 -- nvmf/common.sh@520 -- # local subsystem config 00:18:05.154 17:10:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:05.154 17:10:21 -- nvmf/common.sh@542 -- # cat 00:18:05.154 17:10:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:05.154 { 00:18:05.154 "params": { 00:18:05.154 "name": "Nvme$subsystem", 00:18:05.154 "trtype": "$TEST_TRANSPORT", 00:18:05.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:05.155 "adrfam": "ipv4", 00:18:05.155 "trsvcid": "$NVMF_PORT", 00:18:05.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:05.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:05.155 "hdgst": ${hdgst:-false}, 00:18:05.155 "ddgst": ${ddgst:-false} 00:18:05.155 }, 00:18:05.155 "method": "bdev_nvme_attach_controller" 00:18:05.155 } 00:18:05.155 EOF 00:18:05.155 )") 00:18:05.155 17:10:21 -- nvmf/common.sh@542 -- # cat 00:18:05.155 17:10:21 -- nvmf/common.sh@542 -- # cat 00:18:05.155 17:10:21 -- target/bdev_io_wait.sh@37 -- # wait 538828 00:18:05.155 17:10:21 -- nvmf/common.sh@544 -- # jq . 00:18:05.155 17:10:21 -- nvmf/common.sh@544 -- # jq . 00:18:05.155 17:10:21 -- nvmf/common.sh@544 -- # jq . 00:18:05.155 17:10:21 -- nvmf/common.sh@545 -- # IFS=, 00:18:05.155 17:10:21 -- nvmf/common.sh@544 -- # jq . 
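Spelled out as plain rpc.py calls, the target provisioning that rpc_cmd just performed amounts to the following (a sketch against the default /var/tmp/spdk.sock; -p/-c appear to shrink the bdev_io pool and per-thread cache so bdevperf is starved into the io-wait path this test exercises):

  scripts/rpc.py bdev_set_options -p 5 -c 1        # tiny bdev_io pool / cache
  scripts/rpc.py framework_start_init              # finish --wait-for-rpc startup
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420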
00:18:05.155 17:10:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:05.155 "params": { 00:18:05.155 "name": "Nvme1", 00:18:05.155 "trtype": "tcp", 00:18:05.155 "traddr": "10.0.0.2", 00:18:05.155 "adrfam": "ipv4", 00:18:05.155 "trsvcid": "4420", 00:18:05.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.155 "hdgst": false, 00:18:05.155 "ddgst": false 00:18:05.155 }, 00:18:05.155 "method": "bdev_nvme_attach_controller" 00:18:05.155 }' 00:18:05.155 17:10:21 -- nvmf/common.sh@545 -- # IFS=, 00:18:05.155 17:10:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:05.155 "params": { 00:18:05.155 "name": "Nvme1", 00:18:05.155 "trtype": "tcp", 00:18:05.155 "traddr": "10.0.0.2", 00:18:05.155 "adrfam": "ipv4", 00:18:05.155 "trsvcid": "4420", 00:18:05.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.155 "hdgst": false, 00:18:05.155 "ddgst": false 00:18:05.155 }, 00:18:05.155 "method": "bdev_nvme_attach_controller" 00:18:05.155 }' 00:18:05.155 17:10:21 -- nvmf/common.sh@545 -- # IFS=, 00:18:05.155 17:10:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:05.155 "params": { 00:18:05.155 "name": "Nvme1", 00:18:05.155 "trtype": "tcp", 00:18:05.155 "traddr": "10.0.0.2", 00:18:05.155 "adrfam": "ipv4", 00:18:05.155 "trsvcid": "4420", 00:18:05.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.155 "hdgst": false, 00:18:05.155 "ddgst": false 00:18:05.155 }, 00:18:05.155 "method": "bdev_nvme_attach_controller" 00:18:05.155 }' 00:18:05.155 17:10:21 -- nvmf/common.sh@545 -- # IFS=, 00:18:05.155 17:10:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:05.155 "params": { 00:18:05.155 "name": "Nvme1", 00:18:05.155 "trtype": "tcp", 00:18:05.155 "traddr": "10.0.0.2", 00:18:05.155 "adrfam": "ipv4", 00:18:05.155 "trsvcid": "4420", 00:18:05.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.155 "hdgst": false, 00:18:05.155 "ddgst": false 00:18:05.155 }, 00:18:05.155 "method": "bdev_nvme_attach_controller" 00:18:05.155 }' 00:18:05.155 [2024-07-20 17:10:21.134865] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:05.155 [2024-07-20 17:10:21.134938] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:05.155 [2024-07-20 17:10:21.135392] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:05.155 [2024-07-20 17:10:21.135393] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:05.155 [2024-07-20 17:10:21.135391] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
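Each bdevperf instance receives one of the JSON documents printed above through process substitution, which is why its command line names /dev/fd/63. The config encodes a single controller attach, equivalent to the RPC form this log uses later in the queue_depth test. A sketch:

  # hand bdevperf a generated config on the fly
  ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
      --json <(gen_nvmf_target_json)
  # the same attach, expressed as an RPC against an idle (-z) bdevperf
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1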
00:18:05.155 [2024-07-20 17:10:21.135475] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:05.155 [2024-07-20 17:10:21.135476] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:05.155 [2024-07-20 17:10:21.135476] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:05.155 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.155 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-20 17:10:21.309400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.412 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.412 [2024-07-20 17:10:21.381883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:05.412 [2024-07-20 17:10:21.408570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.412 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.412 [2024-07-20 17:10:21.481335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:05.412 [2024-07-20 17:10:21.506871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.670 [2024-07-20 17:10:21.579023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:05.670 [2024-07-20 17:10:21.579492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.670 [2024-07-20 17:10:21.645757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:05.670 Running I/O for 1 seconds... 00:18:05.927 Running I/O for 1 seconds... 00:18:05.927 Running I/O for 1 seconds... 00:18:05.927 Running I/O for 1 seconds...
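The hex masks decode one bit per CPU, which is why the reactors above land where they do: the target was started with -m 0xF (cores 0-3), while the four bdevperf instances got 0x10, 0x20, 0x40 and 0x80 (cores 4, 5, 6 and 7). A quick decoder:

  for m in 0xF 0x10 0x20 0x40 0x80; do
    printf '%s -> cores:' "$m"
    v=$((m)) c=0
    while [ "$v" -gt 0 ]; do
      [ $((v & 1)) -eq 1 ] && printf ' %d' "$c"
      v=$((v >> 1)) c=$((c + 1))
    done
    echo
  done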
00:18:06.863 00:18:06.863 Latency(us) 00:18:06.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.863 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:06.863 Nvme1n1 : 1.01 7338.41 28.67 0.00 0.00 17300.01 5728.33 25826.04 00:18:06.863 =================================================================================================================== 00:18:06.863 Total : 7338.41 28.67 0.00 0.00 17300.01 5728.33 25826.04 00:18:06.863 00:18:06.863 Latency(us) 00:18:06.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.863 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:06.863 Nvme1n1 : 1.00 196904.79 769.16 0.00 0.00 647.56 263.96 825.27 00:18:06.863 =================================================================================================================== 00:18:06.863 Total : 196904.79 769.16 0.00 0.00 647.56 263.96 825.27 00:18:06.863 00:18:06.863 Latency(us) 00:18:06.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.863 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:06.863 Nvme1n1 : 1.01 6886.86 26.90 0.00 0.00 18535.98 7912.87 37088.52 00:18:06.863 =================================================================================================================== 00:18:06.863 Total : 6886.86 26.90 0.00 0.00 18535.98 7912.87 37088.52 00:18:06.863 00:18:06.863 Latency(us) 00:18:06.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.863 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:06.863 Nvme1n1 : 1.01 6890.76 26.92 0.00 0.00 18452.91 7524.50 29127.11 00:18:06.863 =================================================================================================================== 00:18:06.863 Total : 6890.76 26.92 0.00 0.00 18452.91 7524.50 29127.11 00:18:07.120 17:10:23 -- target/bdev_io_wait.sh@38 -- # wait 538829 00:18:07.120 17:10:23 -- target/bdev_io_wait.sh@39 -- # wait 538832 00:18:07.120 17:10:23 -- target/bdev_io_wait.sh@40 -- # wait 538834 00:18:07.120 17:10:23 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:07.120 17:10:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:07.120 17:10:23 -- common/autotest_common.sh@10 -- # set +x 00:18:07.120 17:10:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:07.120 17:10:23 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:07.120 17:10:23 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:07.120 17:10:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:07.120 17:10:23 -- nvmf/common.sh@116 -- # sync 00:18:07.120 17:10:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:07.120 17:10:23 -- nvmf/common.sh@119 -- # set +e 00:18:07.120 17:10:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:07.120 17:10:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:07.120 rmmod nvme_tcp 00:18:07.378 rmmod nvme_fabrics 00:18:07.378 rmmod nvme_keyring 00:18:07.378 17:10:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:07.378 17:10:23 -- nvmf/common.sh@123 -- # set -e 00:18:07.378 17:10:23 -- nvmf/common.sh@124 -- # return 0 00:18:07.378 17:10:23 -- nvmf/common.sh@477 -- # '[' -n 538680 ']' 00:18:07.378 17:10:23 -- nvmf/common.sh@478 -- # killprocess 538680 00:18:07.378 17:10:23 -- common/autotest_common.sh@926 -- # '[' -z 538680 ']' 00:18:07.378 17:10:23 -- common/autotest_common.sh@930 
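The four tables are self-consistent under Little's law, IOPS ~ queue_depth / mean_latency: at depth 128 the measured write latency of 18452.91 us predicts about 6.9k IOPS, and the no-op flush at 647.56 us predicts about 198k, both close to the reported totals. Checking with awk:

  awk 'BEGIN {
    printf "write: %.0f IOPS\n", 128 / 18452.91e-6   # table reports 6890.76
    printf "flush: %.0f IOPS\n", 128 / 647.56e-6     # table reports 196904.79
  }'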
-- # kill -0 538680 00:18:07.378 17:10:23 -- common/autotest_common.sh@931 -- # uname 00:18:07.378 17:10:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:07.378 17:10:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 538680 00:18:07.378 17:10:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:07.378 17:10:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:07.378 17:10:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 538680' 00:18:07.378 killing process with pid 538680 00:18:07.378 17:10:23 -- common/autotest_common.sh@945 -- # kill 538680 00:18:07.378 17:10:23 -- common/autotest_common.sh@950 -- # wait 538680 00:18:07.634 17:10:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:07.634 17:10:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:07.634 17:10:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:07.634 17:10:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.634 17:10:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:07.634 17:10:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.634 17:10:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.634 17:10:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.532 17:10:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:09.532 00:18:09.532 real 0m7.060s 00:18:09.532 user 0m16.076s 00:18:09.532 sys 0m3.274s 00:18:09.532 17:10:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.532 17:10:25 -- common/autotest_common.sh@10 -- # set +x 00:18:09.532 ************************************ 00:18:09.532 END TEST nvmf_bdev_io_wait 00:18:09.532 ************************************ 00:18:09.532 17:10:25 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:09.532 17:10:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:09.532 17:10:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:09.532 17:10:25 -- common/autotest_common.sh@10 -- # set +x 00:18:09.532 ************************************ 00:18:09.532 START TEST nvmf_queue_depth 00:18:09.532 ************************************ 00:18:09.532 17:10:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:09.789 * Looking for test storage... 
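The killprocess/teardown sequence above repeats after every sub-test: verify the pid is alive, check the process name (refusing a bare sudo), then kill and reap it before the nvme modules are removed. A simplified sketch of the flow visible in the trace (the real helper in autotest_common.sh carries extra cases):

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
    [ "$name" = sudo ] && return 1              # never kill a bare sudo
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                  # reap so rmmod can proceed
  }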
00:18:09.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.790 17:10:25 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.790 17:10:25 -- nvmf/common.sh@7 -- # uname -s 00:18:09.790 17:10:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.790 17:10:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.790 17:10:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.790 17:10:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.790 17:10:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.790 17:10:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.790 17:10:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.790 17:10:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.790 17:10:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.790 17:10:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.790 17:10:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.790 17:10:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.790 17:10:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.790 17:10:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.790 17:10:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.790 17:10:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.790 17:10:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.790 17:10:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.790 17:10:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.790 17:10:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.790 17:10:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.790 17:10:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.790 17:10:25 -- paths/export.sh@5 -- # export PATH 00:18:09.790 17:10:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.790 17:10:25 -- nvmf/common.sh@46 -- # : 0 00:18:09.790 17:10:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:09.790 17:10:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:09.790 17:10:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:09.790 17:10:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.790 17:10:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.790 17:10:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:09.790 17:10:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:09.790 17:10:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:09.790 17:10:25 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:09.790 17:10:25 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:09.790 17:10:25 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.790 17:10:25 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:09.790 17:10:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:09.790 17:10:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.790 17:10:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:09.790 17:10:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:09.790 17:10:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:09.790 17:10:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.790 17:10:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.790 17:10:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.790 17:10:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:09.790 17:10:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:09.790 17:10:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:09.790 17:10:25 -- common/autotest_common.sh@10 -- # set +x 00:18:11.689 17:10:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:11.689 17:10:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:11.689 17:10:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:11.689 17:10:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:11.689 17:10:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:11.689 17:10:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:11.689 17:10:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:11.689 17:10:27 -- nvmf/common.sh@294 -- # net_devs=() 
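Unlike the previous test, queue_depth.sh drives bdevperf over a private RPC socket instead of a JSON config: bdevperf starts idle with -z, the controller is attached by RPC, and perform_tests launches the qd=1024 verify run. Roughly, using the paths and arguments that appear below in the trace:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  kill "$bdevperf_pid" && wait "$bdevperf_pid"       # shut the idle app down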
00:18:11.689 17:10:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:11.689 17:10:27 -- nvmf/common.sh@295 -- # e810=() 00:18:11.689 17:10:27 -- nvmf/common.sh@295 -- # local -ga e810 00:18:11.689 17:10:27 -- nvmf/common.sh@296 -- # x722=() 00:18:11.689 17:10:27 -- nvmf/common.sh@296 -- # local -ga x722 00:18:11.689 17:10:27 -- nvmf/common.sh@297 -- # mlx=() 00:18:11.689 17:10:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:11.689 17:10:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.689 17:10:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:11.689 17:10:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:11.689 17:10:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:11.689 17:10:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:11.689 17:10:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:11.689 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:11.689 17:10:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:11.689 17:10:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:11.689 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:11.689 17:10:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:11.689 17:10:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:11.689 17:10:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.689 17:10:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:11.689 17:10:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
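The 'Found net devices under ...' lines that follow come straight from sysfs globbing: a PCI network function lists its kernel netdev name(s) under net/ and its bound driver as a symlink. For the first port in this run:

  ls /sys/bus/pci/devices/0000:0a:00.0/net                             # -> cvl_0_0
  basename "$(readlink -f /sys/bus/pci/devices/0000:0a:00.0/driver)"   # -> ice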
00:18:11.689 17:10:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:11.689 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:11.689 17:10:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.689 17:10:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:11.689 17:10:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.689 17:10:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:11.689 17:10:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.689 17:10:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:11.689 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:11.689 17:10:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.689 17:10:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:11.689 17:10:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:11.689 17:10:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:11.689 17:10:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.689 17:10:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.689 17:10:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.689 17:10:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:11.689 17:10:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.689 17:10:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.689 17:10:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:11.689 17:10:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.689 17:10:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.689 17:10:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:11.689 17:10:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:11.689 17:10:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.689 17:10:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.689 17:10:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.689 17:10:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.689 17:10:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:11.689 17:10:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.689 17:10:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.689 17:10:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.689 17:10:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:11.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:18:11.689 00:18:11.689 --- 10.0.0.2 ping statistics --- 00:18:11.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.689 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:18:11.689 17:10:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:18:11.689 00:18:11.689 --- 10.0.0.1 ping statistics --- 00:18:11.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.689 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:18:11.689 17:10:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.689 17:10:27 -- nvmf/common.sh@410 -- # return 0 00:18:11.689 17:10:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:11.689 17:10:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.689 17:10:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:11.689 17:10:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.689 17:10:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:11.689 17:10:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:11.689 17:10:27 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:11.690 17:10:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:11.690 17:10:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:11.690 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.690 17:10:27 -- nvmf/common.sh@469 -- # nvmfpid=541009 00:18:11.690 17:10:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.690 17:10:27 -- nvmf/common.sh@470 -- # waitforlisten 541009 00:18:11.690 17:10:27 -- common/autotest_common.sh@819 -- # '[' -z 541009 ']' 00:18:11.690 17:10:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.690 17:10:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.690 17:10:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.690 17:10:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.690 17:10:27 -- common/autotest_common.sh@10 -- # set +x 00:18:11.690 [2024-07-20 17:10:27.833784] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:11.690 [2024-07-20 17:10:27.833884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.948 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.948 [2024-07-20 17:10:27.908689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.948 [2024-07-20 17:10:28.001424] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:11.948 [2024-07-20 17:10:28.001591] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.948 [2024-07-20 17:10:28.001610] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.948 [2024-07-20 17:10:28.001624] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
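Because the target runs with -e 0xFFFF every tracepoint group is armed, and the notices above name both capture paths. For reference (commands quoted from the notices; the binary location may vary by build):

  build/bin/spdk_trace -s nvmf -i 0     # snapshot events from the live target
  cp /dev/shm/nvmf_trace.0 /tmp/        # or keep the shm file for offline analysis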
00:18:11.948 [2024-07-20 17:10:28.001662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.881 17:10:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:12.881 17:10:28 -- common/autotest_common.sh@852 -- # return 0 00:18:12.881 17:10:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:12.881 17:10:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:12.881 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:18:12.881 17:10:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.881 17:10:28 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.881 17:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.881 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:18:12.881 [2024-07-20 17:10:28.823724] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.881 17:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.881 17:10:28 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:12.881 17:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.881 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:18:12.881 Malloc0 00:18:12.881 17:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.881 17:10:28 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:12.881 17:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.881 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:18:12.881 17:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.881 17:10:28 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.881 17:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.881 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:18:12.881 17:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.881 17:10:28 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.881 17:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.881 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:18:12.881 [2024-07-20 17:10:28.884100] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.881 17:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.881 17:10:28 -- target/queue_depth.sh@30 -- # bdevperf_pid=541122 00:18:12.881 17:10:28 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:12.881 17:10:28 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:12.881 17:10:28 -- target/queue_depth.sh@33 -- # waitforlisten 541122 /var/tmp/bdevperf.sock 00:18:12.881 17:10:28 -- common/autotest_common.sh@819 -- # '[' -z 541122 ']' 00:18:12.881 17:10:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.881 17:10:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:12.881 17:10:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:12.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.881 17:10:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:12.881 17:10:28 -- common/autotest_common.sh@10 -- # set +x 00:18:12.881 [2024-07-20 17:10:28.927553] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:12.881 [2024-07-20 17:10:28.927629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid541122 ] 00:18:12.881 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.881 [2024-07-20 17:10:28.991335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.143 [2024-07-20 17:10:29.080816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.074 17:10:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:14.075 17:10:29 -- common/autotest_common.sh@852 -- # return 0 00:18:14.075 17:10:29 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:14.075 17:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:14.075 17:10:29 -- common/autotest_common.sh@10 -- # set +x 00:18:14.075 NVMe0n1 00:18:14.075 17:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:14.075 17:10:30 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.075 Running I/O for 10 seconds... 00:18:26.268 00:18:26.268 Latency(us) 00:18:26.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.268 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:26.268 Verification LBA range: start 0x0 length 0x4000 00:18:26.268 NVMe0n1 : 10.11 13271.52 51.84 0.00 0.00 76563.61 14563.56 60584.39 00:18:26.268 =================================================================================================================== 00:18:26.268 Total : 13271.52 51.84 0.00 0.00 76563.61 14563.56 60584.39 00:18:26.268 0 00:18:26.268 17:10:40 -- target/queue_depth.sh@39 -- # killprocess 541122 00:18:26.268 17:10:40 -- common/autotest_common.sh@926 -- # '[' -z 541122 ']' 00:18:26.268 17:10:40 -- common/autotest_common.sh@930 -- # kill -0 541122 00:18:26.268 17:10:40 -- common/autotest_common.sh@931 -- # uname 00:18:26.268 17:10:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:26.268 17:10:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 541122 00:18:26.268 17:10:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:26.268 17:10:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:26.268 17:10:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 541122' 00:18:26.268 killing process with pid 541122 00:18:26.268 17:10:40 -- common/autotest_common.sh@945 -- # kill 541122 00:18:26.268 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.268 00:18:26.268 Latency(us) 00:18:26.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.268 =================================================================================================================== 00:18:26.268 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:26.268 17:10:40 -- 
common/autotest_common.sh@950 -- # wait 541122 00:18:26.268 17:10:40 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:26.268 17:10:40 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:26.268 17:10:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:26.268 17:10:40 -- nvmf/common.sh@116 -- # sync 00:18:26.268 17:10:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:26.268 17:10:40 -- nvmf/common.sh@119 -- # set +e 00:18:26.268 17:10:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:26.268 17:10:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:26.268 rmmod nvme_tcp 00:18:26.268 rmmod nvme_fabrics 00:18:26.268 rmmod nvme_keyring 00:18:26.268 17:10:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:26.268 17:10:40 -- nvmf/common.sh@123 -- # set -e 00:18:26.268 17:10:40 -- nvmf/common.sh@124 -- # return 0 00:18:26.268 17:10:40 -- nvmf/common.sh@477 -- # '[' -n 541009 ']' 00:18:26.268 17:10:40 -- nvmf/common.sh@478 -- # killprocess 541009 00:18:26.268 17:10:40 -- common/autotest_common.sh@926 -- # '[' -z 541009 ']' 00:18:26.268 17:10:40 -- common/autotest_common.sh@930 -- # kill -0 541009 00:18:26.268 17:10:40 -- common/autotest_common.sh@931 -- # uname 00:18:26.268 17:10:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:26.268 17:10:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 541009 00:18:26.268 17:10:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:26.268 17:10:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:26.268 17:10:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 541009' 00:18:26.268 killing process with pid 541009 00:18:26.268 17:10:40 -- common/autotest_common.sh@945 -- # kill 541009 00:18:26.268 17:10:40 -- common/autotest_common.sh@950 -- # wait 541009 00:18:26.268 17:10:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:26.268 17:10:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:26.268 17:10:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:26.268 17:10:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.268 17:10:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:26.268 17:10:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.268 17:10:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.268 17:10:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.832 17:10:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:26.832 00:18:26.832 real 0m17.262s 00:18:26.832 user 0m24.862s 00:18:26.832 sys 0m3.099s 00:18:26.832 17:10:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.832 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:18:26.832 ************************************ 00:18:26.832 END TEST nvmf_queue_depth 00:18:26.832 ************************************ 00:18:26.832 17:10:42 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:26.832 17:10:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:26.832 17:10:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:26.832 17:10:42 -- common/autotest_common.sh@10 -- # set +x 00:18:26.832 ************************************ 00:18:26.832 START TEST nvmf_multipath 00:18:26.832 ************************************ 00:18:26.832 17:10:42 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:27.091 * Looking for test storage... 00:18:27.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.091 17:10:43 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.091 17:10:43 -- nvmf/common.sh@7 -- # uname -s 00:18:27.091 17:10:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.091 17:10:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.091 17:10:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.091 17:10:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.091 17:10:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.091 17:10:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.091 17:10:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.091 17:10:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.091 17:10:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.091 17:10:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.091 17:10:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:27.091 17:10:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:27.091 17:10:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.091 17:10:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.091 17:10:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.091 17:10:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:27.091 17:10:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.091 17:10:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.091 17:10:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.091 17:10:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.091 17:10:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.092 17:10:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.092 17:10:43 -- paths/export.sh@5 -- # export PATH 00:18:27.092 17:10:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.092 17:10:43 -- nvmf/common.sh@46 -- # : 0 00:18:27.092 17:10:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:27.092 17:10:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:27.092 17:10:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:27.092 17:10:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.092 17:10:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.092 17:10:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:27.092 17:10:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:27.092 17:10:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:27.092 17:10:43 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:27.092 17:10:43 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:27.092 17:10:43 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:27.092 17:10:43 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:27.092 17:10:43 -- target/multipath.sh@43 -- # nvmftestinit 00:18:27.092 17:10:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:27.092 17:10:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.092 17:10:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:27.092 17:10:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:27.092 17:10:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:27.092 17:10:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.092 17:10:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.092 17:10:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.092 17:10:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:27.092 17:10:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:27.092 17:10:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:27.092 17:10:43 -- common/autotest_common.sh@10 -- # set +x 00:18:29.018 17:10:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:29.018 17:10:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:29.018 17:10:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:29.018 17:10:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:29.018 17:10:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:29.018 17:10:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:29.018 17:10:44 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:29.018 17:10:44 -- nvmf/common.sh@294 -- # net_devs=() 00:18:29.018 17:10:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:29.018 17:10:44 -- nvmf/common.sh@295 -- # e810=() 00:18:29.018 17:10:44 -- nvmf/common.sh@295 -- # local -ga e810 00:18:29.018 17:10:44 -- nvmf/common.sh@296 -- # x722=() 00:18:29.018 17:10:44 -- nvmf/common.sh@296 -- # local -ga x722 00:18:29.018 17:10:44 -- nvmf/common.sh@297 -- # mlx=() 00:18:29.018 17:10:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:29.018 17:10:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:29.018 17:10:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:29.018 17:10:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:29.018 17:10:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:29.018 17:10:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:29.018 17:10:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:29.018 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:29.018 17:10:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:29.018 17:10:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:29.018 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:29.018 17:10:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:29.018 17:10:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:29.018 17:10:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.018 17:10:44 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:18:29.018 17:10:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.018 17:10:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:29.018 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:29.018 17:10:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.018 17:10:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:29.018 17:10:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.018 17:10:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:29.018 17:10:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.018 17:10:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:29.018 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:29.018 17:10:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.018 17:10:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:29.018 17:10:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:29.018 17:10:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:29.018 17:10:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:29.018 17:10:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.018 17:10:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:29.018 17:10:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:29.018 17:10:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:29.018 17:10:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:29.018 17:10:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:29.018 17:10:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:29.018 17:10:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:29.018 17:10:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.018 17:10:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:29.018 17:10:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:29.018 17:10:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:29.018 17:10:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:29.018 17:10:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:29.018 17:10:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:29.018 17:10:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:29.018 17:10:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:29.018 17:10:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:29.018 17:10:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:29.018 17:10:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:29.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:18:29.018 00:18:29.018 --- 10.0.0.2 ping statistics --- 00:18:29.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.018 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:18:29.018 17:10:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:29.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:29.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:18:29.018 00:18:29.018 --- 10.0.0.1 ping statistics --- 00:18:29.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.018 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:18:29.018 17:10:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.018 17:10:45 -- nvmf/common.sh@410 -- # return 0 00:18:29.018 17:10:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:29.018 17:10:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.019 17:10:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:29.019 17:10:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:29.019 17:10:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.019 17:10:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:29.019 17:10:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:29.019 17:10:45 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:29.019 17:10:45 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:29.019 only one NIC for nvmf test 00:18:29.019 17:10:45 -- target/multipath.sh@47 -- # nvmftestfini 00:18:29.019 17:10:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:29.019 17:10:45 -- nvmf/common.sh@116 -- # sync 00:18:29.019 17:10:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:29.019 17:10:45 -- nvmf/common.sh@119 -- # set +e 00:18:29.019 17:10:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:29.019 17:10:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:29.019 rmmod nvme_tcp 00:18:29.019 rmmod nvme_fabrics 00:18:29.019 rmmod nvme_keyring 00:18:29.019 17:10:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:29.019 17:10:45 -- nvmf/common.sh@123 -- # set -e 00:18:29.019 17:10:45 -- nvmf/common.sh@124 -- # return 0 00:18:29.019 17:10:45 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:29.019 17:10:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:29.019 17:10:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:29.019 17:10:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:29.019 17:10:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.019 17:10:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:29.019 17:10:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.019 17:10:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.275 17:10:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.174 17:10:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:31.174 17:10:47 -- target/multipath.sh@48 -- # exit 0 00:18:31.174 17:10:47 -- target/multipath.sh@1 -- # nvmftestfini 00:18:31.174 17:10:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:31.174 17:10:47 -- nvmf/common.sh@116 -- # sync 00:18:31.174 17:10:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:31.174 17:10:47 -- nvmf/common.sh@119 -- # set +e 00:18:31.174 17:10:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:31.174 17:10:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:31.174 17:10:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:31.174 17:10:47 -- nvmf/common.sh@123 -- # set -e 00:18:31.174 17:10:47 -- nvmf/common.sh@124 -- # return 0 00:18:31.174 17:10:47 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:31.174 17:10:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:31.174 17:10:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:31.174 17:10:47 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:31.174 17:10:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.174 17:10:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:31.174 17:10:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.174 17:10:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.174 17:10:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.174 17:10:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:31.174 00:18:31.174 real 0m4.286s 00:18:31.174 user 0m0.784s 00:18:31.174 sys 0m1.493s 00:18:31.174 17:10:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.174 17:10:47 -- common/autotest_common.sh@10 -- # set +x 00:18:31.174 ************************************ 00:18:31.174 END TEST nvmf_multipath 00:18:31.174 ************************************ 00:18:31.174 17:10:47 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:31.174 17:10:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:31.174 17:10:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:31.174 17:10:47 -- common/autotest_common.sh@10 -- # set +x 00:18:31.174 ************************************ 00:18:31.174 START TEST nvmf_zcopy 00:18:31.174 ************************************ 00:18:31.174 17:10:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:31.174 * Looking for test storage... 00:18:31.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.174 17:10:47 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.174 17:10:47 -- nvmf/common.sh@7 -- # uname -s 00:18:31.174 17:10:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.174 17:10:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.174 17:10:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.174 17:10:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.174 17:10:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.174 17:10:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.174 17:10:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.174 17:10:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.174 17:10:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.174 17:10:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.174 17:10:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.174 17:10:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:31.174 17:10:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.174 17:10:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.174 17:10:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.174 17:10:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.174 17:10:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.174 17:10:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.174 17:10:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.174 17:10:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.174 17:10:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.174 17:10:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.174 17:10:47 -- paths/export.sh@5 -- # export PATH 00:18:31.174 17:10:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.174 17:10:47 -- nvmf/common.sh@46 -- # : 0 00:18:31.174 17:10:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:31.174 17:10:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:31.174 17:10:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:31.175 17:10:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.175 17:10:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.175 17:10:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:31.175 17:10:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:31.175 17:10:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:31.175 17:10:47 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:31.175 17:10:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:31.175 17:10:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.175 17:10:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:31.175 17:10:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:31.175 17:10:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:31.175 17:10:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.175 17:10:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.175 17:10:47 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.175 17:10:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:31.175 17:10:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:31.175 17:10:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:31.175 17:10:47 -- common/autotest_common.sh@10 -- # set +x 00:18:33.078 17:10:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:33.078 17:10:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:33.078 17:10:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:33.078 17:10:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:33.078 17:10:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:33.078 17:10:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:33.078 17:10:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:33.078 17:10:49 -- nvmf/common.sh@294 -- # net_devs=() 00:18:33.078 17:10:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:33.078 17:10:49 -- nvmf/common.sh@295 -- # e810=() 00:18:33.078 17:10:49 -- nvmf/common.sh@295 -- # local -ga e810 00:18:33.078 17:10:49 -- nvmf/common.sh@296 -- # x722=() 00:18:33.078 17:10:49 -- nvmf/common.sh@296 -- # local -ga x722 00:18:33.078 17:10:49 -- nvmf/common.sh@297 -- # mlx=() 00:18:33.078 17:10:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:33.078 17:10:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.078 17:10:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:33.078 17:10:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:33.078 17:10:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:33.078 17:10:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:33.078 17:10:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:33.078 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:33.078 17:10:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:33.078 17:10:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:33.078 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:33.078 
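
[annotation] As in the multipath run above, gather_supported_nvmf_pci_devs is walking the PCI bus cache here: it seeds the e810/x722/mlx ID arrays, keeps only the e810 matches on this rig, and prints a "Found ..." line for each port of the dual-port Intel E810 (0x8086:0x159b, ice driver), then reads the bound netdev name out of the device's sysfs node. A minimal sketch of that per-device step, using only the sysfs paths visible in this log (not a general reimplementation of common.sh):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        # vendor/device IDs live in sysfs next to the bound netdev
        echo "Found $pci ($(cat /sys/bus/pci/devices/$pci/vendor) - $(cat /sys/bus/pci/devices/$pci/device))"
        ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 and cvl_0_1 on this machine
    done
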
17:10:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:33.078 17:10:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:33.078 17:10:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:33.338 17:10:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.338 17:10:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:33.338 17:10:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.338 17:10:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:33.338 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:33.338 17:10:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.338 17:10:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:33.338 17:10:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.338 17:10:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:33.338 17:10:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.338 17:10:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:33.338 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:33.338 17:10:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.338 17:10:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:33.338 17:10:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:33.338 17:10:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:33.338 17:10:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:33.338 17:10:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:33.338 17:10:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.338 17:10:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.338 17:10:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.338 17:10:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:33.338 17:10:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.338 17:10:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.338 17:10:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:33.338 17:10:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.338 17:10:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.338 17:10:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:33.338 17:10:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:33.338 17:10:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.338 17:10:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.338 17:10:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.338 17:10:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.338 17:10:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:33.338 17:10:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.338 17:10:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.338 17:10:49 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.338 17:10:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:33.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:18:33.338 00:18:33.338 --- 10.0.0.2 ping statistics --- 00:18:33.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.338 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:18:33.338 17:10:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:33.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:18:33.338 00:18:33.338 --- 10.0.0.1 ping statistics --- 00:18:33.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.338 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:18:33.338 17:10:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.338 17:10:49 -- nvmf/common.sh@410 -- # return 0 00:18:33.338 17:10:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:33.338 17:10:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.338 17:10:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:33.338 17:10:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:33.338 17:10:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.338 17:10:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:33.338 17:10:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:33.338 17:10:49 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:33.338 17:10:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:33.338 17:10:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:33.338 17:10:49 -- common/autotest_common.sh@10 -- # set +x 00:18:33.338 17:10:49 -- nvmf/common.sh@469 -- # nvmfpid=546442 00:18:33.338 17:10:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:33.338 17:10:49 -- nvmf/common.sh@470 -- # waitforlisten 546442 00:18:33.338 17:10:49 -- common/autotest_common.sh@819 -- # '[' -z 546442 ']' 00:18:33.338 17:10:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.338 17:10:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:33.338 17:10:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.338 17:10:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:33.338 17:10:49 -- common/autotest_common.sh@10 -- # set +x 00:18:33.338 [2024-07-20 17:10:49.438682] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
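
[annotation] The nvmf_tcp_init block that just scrolled by rebuilds the same two-namespace topology the multipath test used: the first E810 port (cvl_0_0) is moved into a fresh network namespace and addressed as the target side (10.0.0.2), the second (cvl_0_1) stays in the root namespace as the initiator (10.0.0.1), and TCP port 4420 is opened for fabric traffic. Condensed from the xtrace above, with nvmf_tgt then launched inside the namespace by nvmfappstart:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

The two pings above were the sanity check that both directions of this link work before the target gets provisioned.
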
00:18:33.338 [2024-07-20 17:10:49.438755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.338 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.597 [2024-07-20 17:10:49.506121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.597 [2024-07-20 17:10:49.592985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:33.597 [2024-07-20 17:10:49.593133] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.597 [2024-07-20 17:10:49.593150] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.597 [2024-07-20 17:10:49.593164] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.597 [2024-07-20 17:10:49.593193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.533 17:10:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:34.533 17:10:50 -- common/autotest_common.sh@852 -- # return 0 00:18:34.533 17:10:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:34.533 17:10:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:34.533 17:10:50 -- common/autotest_common.sh@10 -- # set +x 00:18:34.533 17:10:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.533 17:10:50 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:34.533 17:10:50 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:34.533 17:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.533 17:10:50 -- common/autotest_common.sh@10 -- # set +x 00:18:34.533 [2024-07-20 17:10:50.440364] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.533 17:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.533 17:10:50 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:34.533 17:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.533 17:10:50 -- common/autotest_common.sh@10 -- # set +x 00:18:34.533 17:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.533 17:10:50 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.533 17:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.533 17:10:50 -- common/autotest_common.sh@10 -- # set +x 00:18:34.533 [2024-07-20 17:10:50.456527] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.533 17:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.533 17:10:50 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:34.533 17:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.533 17:10:50 -- common/autotest_common.sh@10 -- # set +x 00:18:34.533 17:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.533 17:10:50 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:34.533 17:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.533 17:10:50 -- common/autotest_common.sh@10 -- # set +x 00:18:34.533 malloc0 00:18:34.533 17:10:50 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:34.533 17:10:50 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.533 17:10:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.533 17:10:50 -- common/autotest_common.sh@10 -- # set +x 00:18:34.533 17:10:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.533 17:10:50 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:34.533 17:10:50 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:34.533 17:10:50 -- nvmf/common.sh@520 -- # config=() 00:18:34.533 17:10:50 -- nvmf/common.sh@520 -- # local subsystem config 00:18:34.533 17:10:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:34.533 17:10:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:34.533 { 00:18:34.533 "params": { 00:18:34.533 "name": "Nvme$subsystem", 00:18:34.533 "trtype": "$TEST_TRANSPORT", 00:18:34.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.533 "adrfam": "ipv4", 00:18:34.533 "trsvcid": "$NVMF_PORT", 00:18:34.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.533 "hdgst": ${hdgst:-false}, 00:18:34.533 "ddgst": ${ddgst:-false} 00:18:34.533 }, 00:18:34.533 "method": "bdev_nvme_attach_controller" 00:18:34.533 } 00:18:34.533 EOF 00:18:34.533 )") 00:18:34.533 17:10:50 -- nvmf/common.sh@542 -- # cat 00:18:34.533 17:10:50 -- nvmf/common.sh@544 -- # jq . 00:18:34.533 17:10:50 -- nvmf/common.sh@545 -- # IFS=, 00:18:34.533 17:10:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:34.533 "params": { 00:18:34.533 "name": "Nvme1", 00:18:34.533 "trtype": "tcp", 00:18:34.533 "traddr": "10.0.0.2", 00:18:34.533 "adrfam": "ipv4", 00:18:34.533 "trsvcid": "4420", 00:18:34.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.533 "hdgst": false, 00:18:34.533 "ddgst": false 00:18:34.533 }, 00:18:34.533 "method": "bdev_nvme_attach_controller" 00:18:34.533 }' 00:18:34.533 [2024-07-20 17:10:50.536578] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:34.533 [2024-07-20 17:10:50.536665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546564 ] 00:18:34.533 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.533 [2024-07-20 17:10:50.603621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.792 [2024-07-20 17:10:50.695459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.792 Running I/O for 10 seconds... 
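
[annotation] While this 10-second verify run executes (queue depth 128, 8 KiB I/O against namespace 1), it is worth unpacking what zcopy.sh just did: the target was provisioned entirely over JSON-RPC before bdevperf was handed a generated config on /dev/fd/62. With rpc_cmd ultimately driving scripts/rpc.py, the sequence above is equivalent to:

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zero in-capsule data, zero-copy enabled
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB ram bdev, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON printed by gen_nvmf_target_json is the initiator side of the same wiring: a single bdev_nvme_attach_controller entry naming Nvme1, tcp, 10.0.0.2:4420 and cnode1, so bdevperf connects across the namespace boundary set up earlier rather than over loopback.
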
00:18:46.983 00:18:46.984 Latency(us) 00:18:46.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.984 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:46.984 Verification LBA range: start 0x0 length 0x1000 00:18:46.984 Nvme1n1 : 10.01 9097.18 71.07 0.00 0.00 14037.72 1401.74 27767.85 00:18:46.984 =================================================================================================================== 00:18:46.984 Total : 9097.18 71.07 0.00 0.00 14037.72 1401.74 27767.85 00:18:46.984 17:11:01 -- target/zcopy.sh@39 -- # perfpid=547870 00:18:46.984 17:11:01 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:46.984 17:11:01 -- common/autotest_common.sh@10 -- # set +x 00:18:46.984 17:11:01 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:46.984 17:11:01 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:46.984 17:11:01 -- nvmf/common.sh@520 -- # config=() 00:18:46.984 17:11:01 -- nvmf/common.sh@520 -- # local subsystem config 00:18:46.984 17:11:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:46.984 17:11:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:46.984 { 00:18:46.984 "params": { 00:18:46.984 "name": "Nvme$subsystem", 00:18:46.984 "trtype": "$TEST_TRANSPORT", 00:18:46.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.984 "adrfam": "ipv4", 00:18:46.984 "trsvcid": "$NVMF_PORT", 00:18:46.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.984 "hdgst": ${hdgst:-false}, 00:18:46.984 "ddgst": ${ddgst:-false} 00:18:46.984 }, 00:18:46.984 "method": "bdev_nvme_attach_controller" 00:18:46.984 } 00:18:46.984 EOF 00:18:46.984 )") 00:18:46.984 17:11:01 -- nvmf/common.sh@542 -- # cat 00:18:46.984 [2024-07-20 17:11:01.175036] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.175094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 17:11:01 -- nvmf/common.sh@544 -- # jq . 
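
[annotation] The second bdevperf instance (perfpid 547870) switches to a 5-second 50/50 randrw workload at the same queue depth and I/O size, and this time the test exercises namespace management while that I/O is in flight. That is what the wall of paired messages filling the rest of this section is: each attempt re-adds NSID 1 while it is still attached, the target pauses the subsystem for the update (the nvmf_rpc_ns_paused callback), rejects the add, and resumes. Presumably issued the same way as the provisioning step earlier, each rejected call amounts to:

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # -> subsystem.c:  Requested NSID 1 already in use
    # -> nvmf_rpc.c:   Unable to add namespace

bdevperf keeps running regardless ("Running I/O for 5 seconds..." appears mid-stream below), so these ERROR lines are expected rejections under load, not test failures.
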
00:18:46.984 17:11:01 -- nvmf/common.sh@545 -- # IFS=, 00:18:46.984 17:11:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:46.984 "params": { 00:18:46.984 "name": "Nvme1", 00:18:46.984 "trtype": "tcp", 00:18:46.984 "traddr": "10.0.0.2", 00:18:46.984 "adrfam": "ipv4", 00:18:46.984 "trsvcid": "4420", 00:18:46.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.984 "hdgst": false, 00:18:46.984 "ddgst": false 00:18:46.984 }, 00:18:46.984 "method": "bdev_nvme_attach_controller" 00:18:46.984 }' 00:18:46.984 [2024-07-20 17:11:01.182994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.183019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.191016] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.191040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.199035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.199057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.207056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.207092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.210475] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:46.984 [2024-07-20 17:11:01.210530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547870 ] 00:18:46.984 [2024-07-20 17:11:01.215092] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.215117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.223115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.223134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.231139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.231158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.239169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.239189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.984 [2024-07-20 17:11:01.247202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.247226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.255228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.255253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.263242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 
1 already in use 00:18:46.984 [2024-07-20 17:11:01.263267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.271264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.271289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.274454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.984 [2024-07-20 17:11:01.279298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.279326] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.287342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.287380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.295334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.295359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.303355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.303380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.311380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.311406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.319403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.319428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.327447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.327483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.335460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.335489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.343469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.343494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.351488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.351513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.359511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.359536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.364105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.984 [2024-07-20 17:11:01.367520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.367541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.375557] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.375582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.383610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.383647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.391629] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.391676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.399649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.399690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.407675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.407714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.415701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.415738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.423729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.423769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.431719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.431743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.439765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.439819] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.447803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.447866] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.455785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.984 [2024-07-20 17:11:01.455818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.984 [2024-07-20 17:11:01.463974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.985 [2024-07-20 17:11:01.463998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.985 [2024-07-20 17:11:01.471867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.985 [2024-07-20 17:11:01.471892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.985 [2024-07-20 17:11:01.479896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.985 [2024-07-20 17:11:01.479921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.985 [2024-07-20 17:11:01.487892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.985 [2024-07-20 17:11:01.487915] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:46.985 [2024-07-20 17:11:01.495906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:46.985 [2024-07-20 17:11:01.495931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats for the add-namespace attempts from 17:11:01.503958 through 17:11:01.536039 ...]
00:18:46.985 Running I/O for 5 seconds...
[... the pair then repeats for every add-namespace attempt from 17:11:01.544040 through 17:11:05.673542, with the elapsed-time stamp advancing from 00:18:46.985 to 00:18:49.567 across the repetitions ...]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.567 [2024-07-20 17:11:05.693938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.567 [2024-07-20 17:11:05.709417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.567 [2024-07-20 17:11:05.709449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.731072] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.731128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.743976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.744008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.762641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.762672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.775612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.775637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.789429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.789454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.807875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.807921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.825442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.825487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.838967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.839000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.858718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.858749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.873121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.873167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.891612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.891644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.905523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.905547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.920195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.920241] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.935911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.935944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.948173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.948218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.965148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.965194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.845 [2024-07-20 17:11:05.978075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.845 [2024-07-20 17:11:05.978107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.102 [2024-07-20 17:11:05.999600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.102 [2024-07-20 17:11:05.999632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.102 [2024-07-20 17:11:06.012489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.102 [2024-07-20 17:11:06.012521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.102 [2024-07-20 17:11:06.033222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.102 [2024-07-20 17:11:06.033274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.102 [2024-07-20 17:11:06.047484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.102 [2024-07-20 17:11:06.047514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.061074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.061101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.077195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.077241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.095239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.095265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.114067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.114098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.127624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.127648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.140688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.140714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.155590] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.155622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.168781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.168829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.180872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.180905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.195159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.195190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.210665] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.210696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.224651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.224682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.241865] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.241897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.103 [2024-07-20 17:11:06.256123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.103 [2024-07-20 17:11:06.256149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.360 [2024-07-20 17:11:06.273331] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.360 [2024-07-20 17:11:06.273376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.360 [2024-07-20 17:11:06.288213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.360 [2024-07-20 17:11:06.288245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.360 [2024-07-20 17:11:06.307473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.360 [2024-07-20 17:11:06.307505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.360 [2024-07-20 17:11:06.322765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.360 [2024-07-20 17:11:06.322824] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.360 [2024-07-20 17:11:06.344692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.360 [2024-07-20 17:11:06.344726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.360 [2024-07-20 17:11:06.364758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.364814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.361 [2024-07-20 17:11:06.380054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.380094] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.361 [2024-07-20 17:11:06.395375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.395408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.361 [2024-07-20 17:11:06.413971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.414019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.361 [2024-07-20 17:11:06.429600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.429643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.361 [2024-07-20 17:11:06.443247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.443279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.361 [2024-07-20 17:11:06.463042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.463075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.361 [2024-07-20 17:11:06.482763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.482820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.361 [2024-07-20 17:11:06.496465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.496490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.361 [2024-07-20 17:11:06.507749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.361 [2024-07-20 17:11:06.507780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.520681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.520714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.539698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.539728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.555942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.555975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.567228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.567274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 00:18:50.649 Latency(us) 00:18:50.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.649 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:50.649 Nvme1n1 : 5.01 8120.39 63.44 0.00 0.00 15748.54 4951.61 38059.43 00:18:50.649 =================================================================================================================== 00:18:50.649 Total : 8120.39 63.44 0.00 0.00 15748.54 4951.61 38059.43 00:18:50.649 [2024-07-20 17:11:06.574901] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.574926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.582928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.582953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.590996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.591039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.599030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.599071] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.607043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.607086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.615069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.615113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.623093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.623136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.631112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.631158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.639133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.639177] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.647166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.647212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.655187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.655234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.663208] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.663251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.671220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.671264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.679236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.679279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.687263] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.687308] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.695289] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.695334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.703256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.703281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.711278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.711302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.719357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.719401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.727366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.727408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.735370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.735406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.743361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.743381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.751435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.751477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.759457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.649 [2024-07-20 17:11:06.759510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.649 [2024-07-20 17:11:06.767453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.650 [2024-07-20 17:11:06.767485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.650 [2024-07-20 17:11:06.775465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.650 [2024-07-20 17:11:06.775492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.650 [2024-07-20 17:11:06.783486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.650 [2024-07-20 17:11:06.783512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (547870) - No such process 00:18:50.912 17:11:06 -- target/zcopy.sh@49 -- # wait 547870 00:18:50.912 17:11:06 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:50.912 17:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.912 17:11:06 -- common/autotest_common.sh@10 -- # set +x 00:18:50.912 17:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.912 17:11:06 -- target/zcopy.sh@53 -- # 
rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:50.912 17:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.912 17:11:06 -- common/autotest_common.sh@10 -- # set +x 00:18:50.912 delay0 00:18:50.912 17:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.912 17:11:06 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:50.912 17:11:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.912 17:11:06 -- common/autotest_common.sh@10 -- # set +x 00:18:50.912 17:11:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.912 17:11:06 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:50.912 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.912 [2024-07-20 17:11:06.864855] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:57.460 Initializing NVMe Controllers 00:18:57.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:57.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:57.460 Initialization complete. Launching workers. 00:18:57.460 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 70 00:18:57.460 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 357, failed to submit 33 00:18:57.460 success 129, unsuccess 228, failed 0 00:18:57.460 17:11:13 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:57.460 17:11:13 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:57.460 17:11:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.460 17:11:13 -- nvmf/common.sh@116 -- # sync 00:18:57.460 17:11:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:57.460 17:11:13 -- nvmf/common.sh@119 -- # set +e 00:18:57.460 17:11:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.460 17:11:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:57.460 rmmod nvme_tcp 00:18:57.460 rmmod nvme_fabrics 00:18:57.460 rmmod nvme_keyring 00:18:57.460 17:11:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.460 17:11:13 -- nvmf/common.sh@123 -- # set -e 00:18:57.460 17:11:13 -- nvmf/common.sh@124 -- # return 0 00:18:57.460 17:11:13 -- nvmf/common.sh@477 -- # '[' -n 546442 ']' 00:18:57.460 17:11:13 -- nvmf/common.sh@478 -- # killprocess 546442 00:18:57.460 17:11:13 -- common/autotest_common.sh@926 -- # '[' -z 546442 ']' 00:18:57.460 17:11:13 -- common/autotest_common.sh@930 -- # kill -0 546442 00:18:57.460 17:11:13 -- common/autotest_common.sh@931 -- # uname 00:18:57.460 17:11:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:57.460 17:11:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 546442 00:18:57.460 17:11:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:57.460 17:11:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:57.460 17:11:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 546442' 00:18:57.460 killing process with pid 546442 00:18:57.460 17:11:13 -- common/autotest_common.sh@945 -- # kill 546442 00:18:57.460 17:11:13 -- common/autotest_common.sh@950 -- # wait 546442 00:18:57.460 17:11:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
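(For orientation: the delay/abort sequence that the zcopy trace above drives condenses to the short RPC script below. It is a reconstruction rather than the zcopy.sh source -- the RPC method names, flags, and the abort invocation are copied from the trace, while the rpc.py path and the assumption of an already-running target on 10.0.0.2:4420 are added here.)

#!/usr/bin/env bash
# Hedged sketch of the tail of the zcopy test, assuming an SPDK nvmf target is
# already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 with a malloc0 bdev.
rpc=./scripts/rpc.py                       # assumed location of the SPDK RPC client

# Free NSID 1 again after the deliberate add-namespace error loop above.
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in a delay bdev; -r/-t set average/p99 read latency and -w/-n set
# average/p99 write latency, all in microseconds (1000000 us = 1 s per I/O).
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Publish the deliberately slow bdev as NSID 1, then queue I/O at it and abort
# the outstanding commands for 5 seconds, as in the logged abort invocation.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The 1 s artificial latency is the point of the exercise: it keeps commands outstanding long enough for the abort tool to race them, which is why the summary above reports a mix of successful and unsuccessful aborts rather than all of one kind.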
00:18:57.460 17:11:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:57.460 17:11:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:57.460 17:11:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.460 17:11:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:57.460 17:11:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.460 17:11:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.460 17:11:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.356 17:11:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:59.356 00:18:59.356 real 0m28.190s 00:18:59.356 user 0m41.602s 00:18:59.356 sys 0m7.924s 00:18:59.356 17:11:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.356 17:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.356 ************************************ 00:18:59.356 END TEST nvmf_zcopy 00:18:59.356 ************************************ 00:18:59.357 17:11:15 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:59.357 17:11:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:59.357 17:11:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.357 17:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:59.357 ************************************ 00:18:59.357 START TEST nvmf_nmic 00:18:59.357 ************************************ 00:18:59.357 17:11:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:59.615 * Looking for test storage... 00:18:59.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.615 17:11:15 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.615 17:11:15 -- nvmf/common.sh@7 -- # uname -s 00:18:59.615 17:11:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.615 17:11:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.615 17:11:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.615 17:11:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.615 17:11:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.615 17:11:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.615 17:11:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.615 17:11:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.615 17:11:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.615 17:11:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.615 17:11:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.615 17:11:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.615 17:11:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.615 17:11:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.615 17:11:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.615 17:11:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.615 17:11:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.615 17:11:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.615 17:11:15 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.615 17:11:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.615 17:11:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.615 17:11:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.615 17:11:15 -- paths/export.sh@5 -- # export PATH 00:18:59.615 17:11:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.615 17:11:15 -- nvmf/common.sh@46 -- # : 0 00:18:59.615 17:11:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:59.615 17:11:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:59.615 17:11:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:59.615 17:11:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.615 17:11:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.615 17:11:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:59.615 17:11:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:59.615 17:11:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:59.615 17:11:15 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.615 17:11:15 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.615 17:11:15 -- target/nmic.sh@14 -- # nvmftestinit 00:18:59.615 17:11:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:59.615 17:11:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.615 17:11:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:59.615 17:11:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:59.615 17:11:15 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:18:59.615 17:11:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.615 17:11:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.615 17:11:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.615 17:11:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:59.615 17:11:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:59.615 17:11:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:59.615 17:11:15 -- common/autotest_common.sh@10 -- # set +x 00:19:01.513 17:11:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:01.513 17:11:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:01.513 17:11:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:01.513 17:11:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:01.513 17:11:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:01.513 17:11:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:01.513 17:11:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:01.513 17:11:17 -- nvmf/common.sh@294 -- # net_devs=() 00:19:01.513 17:11:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:01.513 17:11:17 -- nvmf/common.sh@295 -- # e810=() 00:19:01.513 17:11:17 -- nvmf/common.sh@295 -- # local -ga e810 00:19:01.513 17:11:17 -- nvmf/common.sh@296 -- # x722=() 00:19:01.513 17:11:17 -- nvmf/common.sh@296 -- # local -ga x722 00:19:01.513 17:11:17 -- nvmf/common.sh@297 -- # mlx=() 00:19:01.513 17:11:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:01.513 17:11:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.513 17:11:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:01.513 17:11:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:01.513 17:11:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:01.513 17:11:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:01.513 17:11:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:01.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:01.513 17:11:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:01.513 
17:11:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:01.513 17:11:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:01.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:01.513 17:11:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:01.513 17:11:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:01.513 17:11:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:01.513 17:11:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.513 17:11:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:01.513 17:11:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.513 17:11:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:01.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:01.513 17:11:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.513 17:11:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:01.513 17:11:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.513 17:11:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:01.513 17:11:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.513 17:11:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:01.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:01.513 17:11:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.513 17:11:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:01.513 17:11:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:01.514 17:11:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:01.514 17:11:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:01.514 17:11:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:01.514 17:11:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.514 17:11:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.514 17:11:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.514 17:11:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:01.514 17:11:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.514 17:11:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.514 17:11:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:01.514 17:11:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.514 17:11:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.514 17:11:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:01.514 17:11:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:01.514 17:11:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.514 17:11:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.514 17:11:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.514 17:11:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.514 17:11:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:01.514 
17:11:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:01.514 17:11:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:01.514 17:11:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:01.514 17:11:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:19:01.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:01.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms
00:19:01.514 
00:19:01.514 --- 10.0.0.2 ping statistics ---
00:19:01.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:01.514 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:19:01.514 17:11:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:01.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:01.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms
00:19:01.514 
00:19:01.514 --- 10.0.0.1 ping statistics ---
00:19:01.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:01.514 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:19:01.514 17:11:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:01.514 17:11:17 -- nvmf/common.sh@410 -- # return 0
00:19:01.514 17:11:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:19:01.514 17:11:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:01.514 17:11:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:19:01.514 17:11:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:19:01.514 17:11:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:01.514 17:11:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:19:01.514 17:11:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:19:01.514 17:11:17 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:19:01.514 17:11:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:19:01.514 17:11:17 -- common/autotest_common.sh@712 -- # xtrace_disable
00:19:01.514 17:11:17 -- common/autotest_common.sh@10 -- # set +x
00:19:01.514 17:11:17 -- nvmf/common.sh@469 -- # nvmfpid=551186
00:19:01.514 17:11:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:19:01.514 17:11:17 -- nvmf/common.sh@470 -- # waitforlisten 551186
00:19:01.514 17:11:17 -- common/autotest_common.sh@819 -- # '[' -z 551186 ']'
00:19:01.514 17:11:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:01.514 17:11:17 -- common/autotest_common.sh@824 -- # local max_retries=100
00:19:01.514 17:11:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:01.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:01.514 17:11:17 -- common/autotest_common.sh@828 -- # xtrace_disable
00:19:01.514 17:11:17 -- common/autotest_common.sh@10 -- # set +x
00:19:01.514 [2024-07-20 17:11:17.645919] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
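(The interface plumbing that nvmf_tcp_init performed above reduces to the sketch below: one port of the NIC pair is moved into a network namespace for the target, the peer stays in the root namespace for the initiator, and both directions are verified with ping. All commands and names -- cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk, the 10.0.0.x addresses -- are taken from the trace; running it standalone assumes the cvl interfaces exist and nothing else owns them.)

#!/usr/bin/env bash
ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps the peer port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Launching nvmf_tgt under ip netns exec (as the trace does just below) gives the target a private view of its NIC port, so the kernel NVMe/TCP initiator and the SPDK target can coexist on one host.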
00:19:01.514 [2024-07-20 17:11:17.646018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.771 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.771 [2024-07-20 17:11:17.713329] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.771 [2024-07-20 17:11:17.802073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.771 [2024-07-20 17:11:17.802228] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.771 [2024-07-20 17:11:17.802246] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.771 [2024-07-20 17:11:17.802259] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.771 [2024-07-20 17:11:17.802309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.771 [2024-07-20 17:11:17.802372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.771 [2024-07-20 17:11:17.802607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.771 [2024-07-20 17:11:17.802611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.702 17:11:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:02.702 17:11:18 -- common/autotest_common.sh@852 -- # return 0 00:19:02.702 17:11:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:02.702 17:11:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:02.702 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:19:02.702 17:11:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.702 17:11:18 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.702 17:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.702 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:19:02.702 [2024-07-20 17:11:18.603302] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.702 17:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.702 17:11:18 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.702 17:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.702 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:19:02.702 Malloc0 00:19:02.702 17:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.702 17:11:18 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:02.702 17:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.702 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:19:02.702 17:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.702 17:11:18 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.702 17:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.702 17:11:18 -- common/autotest_common.sh@10 -- # set +x 00:19:02.702 17:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.702 17:11:18 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.702 17:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.702 17:11:18 -- 
common/autotest_common.sh@10 -- # set +x
00:19:02.702 [2024-07-20 17:11:18.654899] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:02.702 17:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:02.702 17:11:18 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:19:02.702 test case1: single bdev can't be used in multiple subsystems
00:19:02.702 17:11:18 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:19:02.702 17:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:02.702 17:11:18 -- common/autotest_common.sh@10 -- # set +x
00:19:02.702 17:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:02.702 17:11:18 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:19:02.702 17:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:02.702 17:11:18 -- common/autotest_common.sh@10 -- # set +x
00:19:02.702 17:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:02.702 17:11:18 -- target/nmic.sh@28 -- # nmic_status=0
00:19:02.702 17:11:18 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:19:02.702 17:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:02.702 17:11:18 -- common/autotest_common.sh@10 -- # set +x
00:19:02.702 [2024-07-20 17:11:18.678748] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:19:02.702 [2024-07-20 17:11:18.678808] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:19:02.702 [2024-07-20 17:11:18.678827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:02.702 request:
00:19:02.702 {
00:19:02.702   "nqn": "nqn.2016-06.io.spdk:cnode2",
00:19:02.702   "namespace": {
00:19:02.702     "bdev_name": "Malloc0"
00:19:02.702   },
00:19:02.702   "method": "nvmf_subsystem_add_ns",
00:19:02.702   "req_id": 1
00:19:02.702 }
00:19:02.702 Got JSON-RPC error response
00:19:02.702 response:
00:19:02.702 {
00:19:02.702   "code": -32602,
00:19:02.702   "message": "Invalid parameters"
00:19:02.702 }
00:19:02.702 17:11:18 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]]
00:19:02.702 17:11:18 -- target/nmic.sh@29 -- # nmic_status=1
00:19:02.702 17:11:18 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:19:02.702 17:11:18 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:19:02.702  Adding namespace failed - expected result.
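(The -32602 response above is the pass condition for test case1: the first subsystem that exports a bdev claims it exclusive_write, so a second claim must fail. As a hedged standalone sketch -- the RPC method names and arguments are the ones traced above, while the rpc.py path is assumed -- the same error can be provoked with:)

rpc=./scripts/rpc.py                                            # assumed path to the RPC client
$rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Malloc0 already
                                                                # claimed by cnode1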
00:19:02.702 17:11:18 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:19:02.702 test case2: host connect to nvmf target in multiple paths
00:19:02.702 17:11:18 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:19:02.702 17:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:02.702 17:11:18 -- common/autotest_common.sh@10 -- # set +x
00:19:02.702 [2024-07-20 17:11:18.686892] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:19:02.702 17:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:02.702 17:11:18 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:19:03.265 17:11:19 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:19:03.828 17:11:19 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:19:03.828 17:11:19 -- common/autotest_common.sh@1177 -- # local i=0
00:19:03.828 17:11:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:19:03.828 17:11:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:19:03.828 17:11:19 -- common/autotest_common.sh@1184 -- # sleep 2
00:19:06.365 17:11:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:19:06.365 17:11:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:19:06.365 17:11:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:19:06.365 17:11:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:19:06.365 17:11:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:19:06.365 17:11:21 -- common/autotest_common.sh@1187 -- # return 0
00:19:06.365 17:11:21 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:19:06.365 [global]
00:19:06.365 thread=1
00:19:06.365 invalidate=1
00:19:06.365 rw=write
00:19:06.365 time_based=1
00:19:06.365 runtime=1
00:19:06.365 ioengine=libaio
00:19:06.365 direct=1
00:19:06.365 bs=4096
00:19:06.365 iodepth=1
00:19:06.365 norandommap=0
00:19:06.365 numjobs=1
00:19:06.365 
00:19:06.365 verify_dump=1
00:19:06.365 verify_backlog=512
00:19:06.365 verify_state_save=0
00:19:06.365 do_verify=1
00:19:06.365 verify=crc32c-intel
00:19:06.365 [job0]
00:19:06.365 filename=/dev/nvme0n1
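(The parameter dump above is the fio job file that fio-wrapper generated. For orientation, the host side of test case2 condenses to the hedged sketch below: the nvme-cli flags are the logged ones, while the final fio command is an assumed command-line rendering of that job file, treating /dev/nvme0n1 as the namespace that surfaces after connecting.)

# Connect to the same subsystem over both listeners: two paths, one namespace.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
# Exactly one namespace should surface even though two paths are connected:
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME     # expect: 1
# Run the verified 4 KiB write job from the dump above against it:
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=1 --runtime=1 --time_based=1 \
    --verify=crc32c-intel --do_verify=1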
00:19:06.365 Could not set queue depth (nvme0n1)
00:19:06.365 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:19:06.365 fio-3.35
00:19:06.365 Starting 1 thread
00:19:07.296 
00:19:07.296 job0: (groupid=0, jobs=1): err= 0: pid=551842: Sat Jul 20 17:11:23 2024
00:19:07.296   read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec)
00:19:07.296     slat (nsec): min=5293, max=74702, avg=19369.11, stdev=10117.84
00:19:07.296     clat (usec): min=411, max=715, avg=540.59, stdev=80.44
00:19:07.296      lat (usec): min=422, max=747, avg=559.96, stdev=88.86
00:19:07.296     clat percentiles (usec):
00:19:07.296      |  1.00th=[  424],  5.00th=[  445], 10.00th=[  453], 20.00th=[  465],
00:19:07.296      | 30.00th=[  469], 40.00th=[  482], 50.00th=[  502], 60.00th=[  594],
00:19:07.296      | 70.00th=[  619], 80.00th=[  627], 90.00th=[  644], 95.00th=[  652],
00:19:07.296      | 99.00th=[  685], 99.50th=[  685], 99.90th=[  709], 99.95th=[  717],
00:19:07.296      | 99.99th=[  717]
00:19:07.296   write: IOPS=1217, BW=4871KiB/s (4988kB/s)(4876KiB/1001msec); 0 zone resets
00:19:07.296     slat (nsec): min=6030, max=78918, avg=13338.60, stdev=7440.48
00:19:07.296     clat (usec): min=287, max=484, avg=328.31, stdev=35.32
00:19:07.296      lat (usec): min=295, max=507, avg=341.65, stdev=37.38
00:19:07.296     clat percentiles (usec):
00:19:07.296      |  1.00th=[  293],  5.00th=[  293], 10.00th=[  297], 20.00th=[  297],
00:19:07.296      | 30.00th=[  302], 40.00th=[  306], 50.00th=[  314], 60.00th=[  330],
00:19:07.296      | 70.00th=[  343], 80.00th=[  375], 90.00th=[  383], 95.00th=[  388],
00:19:07.296      | 99.00th=[  412], 99.50th=[  433], 99.90th=[  453], 99.95th=[  486],
00:19:07.296      | 99.99th=[  486]
00:19:07.296    bw (  KiB/s): min= 4096, max= 4096, per=84.09%, avg=4096.00, stdev= 0.00, samples=1
00:19:07.296    iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:19:07.296   lat (usec)   : 500=76.86%, 750=23.14%
00:19:07.296   cpu          : usr=1.60%, sys=4.10%, ctx=2243, majf=0, minf=2
00:19:07.296   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:07.296      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:07.296      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:07.296      issued rwts: total=1024,1219,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:07.296      latency   : target=0, window=0, percentile=100.00%, depth=1
00:19:07.296 
00:19:07.297 Run status group 0 (all jobs):
00:19:07.297    READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec
00:19:07.297   WRITE: bw=4871KiB/s (4988kB/s), 4871KiB/s-4871KiB/s (4988kB/s-4988kB/s), io=4876KiB (4993kB), run=1001-1001msec
00:19:07.297 
00:19:07.297 Disk stats (read/write):
00:19:07.297   nvme0n1: ios=1010/1024, merge=0/0, ticks=653/330, in_queue=983, util=96.89%
00:19:07.297 17:11:23 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:19:07.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:19:07.297 17:11:23 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:19:07.297 17:11:23 -- common/autotest_common.sh@1198 -- # local i=0
00:19:07.554 17:11:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:19:07.554 17:11:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:07.554 17:11:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:19:07.554 17:11:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:07.554 17:11:23 -- common/autotest_common.sh@1210 -- # return 0
00:19:07.554 17:11:23 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:19:07.554 17:11:23 -- target/nmic.sh@53 -- # nvmftestfini
00:19:07.554 17:11:23 -- nvmf/common.sh@476 -- # nvmfcleanup
00:19:07.554 17:11:23 -- nvmf/common.sh@116 -- # sync
00:19:07.554 17:11:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:19:07.554 17:11:23 -- nvmf/common.sh@119 -- # set +e
00:19:07.554 17:11:23 -- nvmf/common.sh@120 -- # for i in {1..20}
00:19:07.554 17:11:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:19:07.554 rmmod nvme_tcp
00:19:07.554 rmmod nvme_fabrics
00:19:07.554 rmmod nvme_keyring
00:19:07.554 17:11:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:19:07.554 17:11:23 -- nvmf/common.sh@123 -- # set -e
00:19:07.554 17:11:23 -- nvmf/common.sh@124 -- # return 0
00:19:07.554 17:11:23 -- 
nvmf/common.sh@477 -- # '[' -n 551186 ']' 00:19:07.554 17:11:23 -- nvmf/common.sh@478 -- # killprocess 551186 00:19:07.554 17:11:23 -- common/autotest_common.sh@926 -- # '[' -z 551186 ']' 00:19:07.554 17:11:23 -- common/autotest_common.sh@930 -- # kill -0 551186 00:19:07.554 17:11:23 -- common/autotest_common.sh@931 -- # uname 00:19:07.554 17:11:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:07.554 17:11:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 551186 00:19:07.554 17:11:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:07.554 17:11:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:07.554 17:11:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 551186' 00:19:07.554 killing process with pid 551186 00:19:07.554 17:11:23 -- common/autotest_common.sh@945 -- # kill 551186 00:19:07.554 17:11:23 -- common/autotest_common.sh@950 -- # wait 551186 00:19:07.812 17:11:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:07.812 17:11:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:07.812 17:11:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:07.812 17:11:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.812 17:11:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:07.812 17:11:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.812 17:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.812 17:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.727 17:11:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:09.727 00:19:09.727 real 0m10.395s 00:19:09.727 user 0m24.909s 00:19:09.727 sys 0m2.238s 00:19:09.727 17:11:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.727 17:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.727 ************************************ 00:19:09.727 END TEST nvmf_nmic 00:19:09.727 ************************************ 00:19:09.986 17:11:25 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:09.986 17:11:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:09.986 17:11:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:09.986 17:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:09.986 ************************************ 00:19:09.986 START TEST nvmf_fio_target 00:19:09.986 ************************************ 00:19:09.986 17:11:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:09.986 * Looking for test storage... 
00:19:09.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.986 17:11:25 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.986 17:11:25 -- nvmf/common.sh@7 -- # uname -s 00:19:09.986 17:11:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.986 17:11:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.986 17:11:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.986 17:11:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.986 17:11:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.986 17:11:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.986 17:11:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.986 17:11:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.986 17:11:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.986 17:11:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.986 17:11:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.986 17:11:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.986 17:11:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.986 17:11:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.986 17:11:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.986 17:11:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.986 17:11:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.986 17:11:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.986 17:11:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.986 17:11:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.986 17:11:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.986 17:11:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.986 17:11:25 -- paths/export.sh@5 -- # export PATH 00:19:09.986 17:11:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.986 17:11:25 -- nvmf/common.sh@46 -- # : 0 00:19:09.986 17:11:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:09.986 17:11:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:09.986 17:11:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:09.986 17:11:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.986 17:11:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.986 17:11:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:09.986 17:11:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:09.986 17:11:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:09.986 17:11:25 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:09.986 17:11:25 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:09.986 17:11:25 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.986 17:11:25 -- target/fio.sh@16 -- # nvmftestinit 00:19:09.986 17:11:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:09.986 17:11:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.986 17:11:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:09.986 17:11:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:09.986 17:11:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:09.986 17:11:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.986 17:11:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.986 17:11:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.986 17:11:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:09.986 17:11:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:09.986 17:11:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:09.986 17:11:25 -- common/autotest_common.sh@10 -- # set +x 00:19:11.950 17:11:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:11.950 17:11:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:11.950 17:11:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:11.950 17:11:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:11.950 17:11:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:11.950 17:11:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:11.950 17:11:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:11.950 17:11:27 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:11.950 17:11:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:11.950 17:11:27 -- nvmf/common.sh@295 -- # e810=() 00:19:11.950 17:11:27 -- nvmf/common.sh@295 -- # local -ga e810 00:19:11.950 17:11:27 -- nvmf/common.sh@296 -- # x722=() 00:19:11.950 17:11:27 -- nvmf/common.sh@296 -- # local -ga x722 00:19:11.950 17:11:27 -- nvmf/common.sh@297 -- # mlx=() 00:19:11.950 17:11:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:11.950 17:11:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.950 17:11:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:11.950 17:11:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:11.950 17:11:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:11.950 17:11:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:11.950 17:11:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:11.950 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:11.950 17:11:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:11.950 17:11:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:11.950 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:11.950 17:11:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:11.950 17:11:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:11.950 17:11:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:11.951 17:11:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:11.951 17:11:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:11.951 17:11:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.951 17:11:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:11.951 17:11:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:19:11.951 17:11:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:11.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:11.951 17:11:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.951 17:11:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:11.951 17:11:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.951 17:11:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:11.951 17:11:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.951 17:11:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:11.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:11.951 17:11:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.951 17:11:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:11.951 17:11:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:11.951 17:11:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:11.951 17:11:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:11.951 17:11:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:11.951 17:11:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.951 17:11:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.951 17:11:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.951 17:11:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:11.951 17:11:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.951 17:11:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.951 17:11:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:11.951 17:11:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.951 17:11:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.951 17:11:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:11.951 17:11:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:11.951 17:11:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.951 17:11:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.951 17:11:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.951 17:11:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.951 17:11:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:11.951 17:11:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.951 17:11:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.951 17:11:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.951 17:11:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:11.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:19:11.951 00:19:11.951 --- 10.0.0.2 ping statistics --- 00:19:11.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.951 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:11.951 17:11:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:11.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:19:11.951 00:19:11.951 --- 10.0.0.1 ping statistics --- 00:19:11.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.951 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:19:11.951 17:11:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.951 17:11:28 -- nvmf/common.sh@410 -- # return 0 00:19:11.951 17:11:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:11.951 17:11:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.951 17:11:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:11.951 17:11:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:11.951 17:11:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.951 17:11:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:11.951 17:11:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:11.951 17:11:28 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:11.951 17:11:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:11.951 17:11:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:11.951 17:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:11.951 17:11:28 -- nvmf/common.sh@469 -- # nvmfpid=554005 00:19:11.951 17:11:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:11.951 17:11:28 -- nvmf/common.sh@470 -- # waitforlisten 554005 00:19:11.951 17:11:28 -- common/autotest_common.sh@819 -- # '[' -z 554005 ']' 00:19:11.951 17:11:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.951 17:11:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:11.951 17:11:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.951 17:11:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:11.951 17:11:28 -- common/autotest_common.sh@10 -- # set +x 00:19:12.209 [2024-07-20 17:11:28.115258] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:12.210 [2024-07-20 17:11:28.115343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.210 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.210 [2024-07-20 17:11:28.187150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.210 [2024-07-20 17:11:28.278771] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:12.210 [2024-07-20 17:11:28.278949] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.210 [2024-07-20 17:11:28.278971] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.210 [2024-07-20 17:11:28.278986] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:12.210 [2024-07-20 17:11:28.279071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.210 [2024-07-20 17:11:28.279123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.210 [2024-07-20 17:11:28.279175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.210 [2024-07-20 17:11:28.279178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.141 17:11:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:13.141 17:11:29 -- common/autotest_common.sh@852 -- # return 0 00:19:13.141 17:11:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:13.141 17:11:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:13.141 17:11:29 -- common/autotest_common.sh@10 -- # set +x 00:19:13.141 17:11:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.141 17:11:29 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:13.398 [2024-07-20 17:11:29.326182] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.398 17:11:29 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.656 17:11:29 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:13.656 17:11:29 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.913 17:11:29 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:13.913 17:11:29 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.169 17:11:30 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:14.169 17:11:30 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.426 17:11:30 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:14.426 17:11:30 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:14.683 17:11:30 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.940 17:11:30 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:14.940 17:11:30 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.198 17:11:31 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:15.198 17:11:31 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.455 17:11:31 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:15.455 17:11:31 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:15.712 17:11:31 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:15.712 17:11:31 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:15.712 17:11:31 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:15.969 17:11:32 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:15.969 17:11:32 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:16.226 17:11:32 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.483 [2024-07-20 17:11:32.544631] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.483 17:11:32 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:16.741 17:11:32 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:17.004 17:11:33 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:17.567 17:11:33 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:17.567 17:11:33 -- common/autotest_common.sh@1177 -- # local i=0 00:19:17.567 17:11:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:17.567 17:11:33 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:17.567 17:11:33 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:17.567 17:11:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:19.459 17:11:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:19.459 17:11:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:19.459 17:11:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:19.716 17:11:35 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:19.716 17:11:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:19.716 17:11:35 -- common/autotest_common.sh@1187 -- # return 0 00:19:19.716 17:11:35 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:19.716 [global] 00:19:19.716 thread=1 00:19:19.716 invalidate=1 00:19:19.716 rw=write 00:19:19.716 time_based=1 00:19:19.716 runtime=1 00:19:19.716 ioengine=libaio 00:19:19.716 direct=1 00:19:19.716 bs=4096 00:19:19.716 iodepth=1 00:19:19.716 norandommap=0 00:19:19.716 numjobs=1 00:19:19.716 00:19:19.716 verify_dump=1 00:19:19.716 verify_backlog=512 00:19:19.716 verify_state_save=0 00:19:19.716 do_verify=1 00:19:19.716 verify=crc32c-intel 00:19:19.716 [job0] 00:19:19.716 filename=/dev/nvme0n1 00:19:19.716 [job1] 00:19:19.716 filename=/dev/nvme0n2 00:19:19.716 [job2] 00:19:19.716 filename=/dev/nvme0n3 00:19:19.716 [job3] 00:19:19.716 filename=/dev/nvme0n4 00:19:19.716 Could not set queue depth (nvme0n1) 00:19:19.716 Could not set queue depth (nvme0n2) 00:19:19.716 Could not set queue depth (nvme0n3) 00:19:19.716 Could not set queue depth (nvme0n4) 00:19:19.716 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.716 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.716 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.716 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.716 fio-3.35 
00:19:19.716 Starting 4 threads 00:19:21.087 00:19:21.087 job0: (groupid=0, jobs=1): err= 0: pid=555042: Sat Jul 20 17:11:37 2024 00:19:21.087 read: IOPS=529, BW=2118KiB/s (2169kB/s)(2120KiB/1001msec) 00:19:21.087 slat (nsec): min=7084, max=74760, avg=26959.92, stdev=10173.86 00:19:21.087 clat (usec): min=651, max=1053, avg=858.47, stdev=49.91 00:19:21.087 lat (usec): min=667, max=1086, avg=885.43, stdev=55.95 00:19:21.087 clat percentiles (usec): 00:19:21.087 | 1.00th=[ 742], 5.00th=[ 783], 10.00th=[ 799], 20.00th=[ 816], 00:19:21.087 | 30.00th=[ 832], 40.00th=[ 840], 50.00th=[ 865], 60.00th=[ 881], 00:19:21.088 | 70.00th=[ 889], 80.00th=[ 898], 90.00th=[ 906], 95.00th=[ 930], 00:19:21.088 | 99.00th=[ 988], 99.50th=[ 996], 99.90th=[ 1057], 99.95th=[ 1057], 00:19:21.088 | 99.99th=[ 1057] 00:19:21.088 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:21.088 slat (nsec): min=8242, max=80886, avg=21515.69, stdev=10706.61 00:19:21.088 clat (usec): min=391, max=1497, avg=487.35, stdev=56.19 00:19:21.088 lat (usec): min=401, max=1506, avg=508.87, stdev=57.15 00:19:21.088 clat percentiles (usec): 00:19:21.088 | 1.00th=[ 408], 5.00th=[ 429], 10.00th=[ 445], 20.00th=[ 457], 00:19:21.088 | 30.00th=[ 465], 40.00th=[ 469], 50.00th=[ 478], 60.00th=[ 494], 00:19:21.088 | 70.00th=[ 506], 80.00th=[ 515], 90.00th=[ 537], 95.00th=[ 553], 00:19:21.088 | 99.00th=[ 619], 99.50th=[ 685], 99.90th=[ 1057], 99.95th=[ 1500], 00:19:21.088 | 99.99th=[ 1500] 00:19:21.088 bw ( KiB/s): min= 4096, max= 4096, per=34.20%, avg=4096.00, stdev= 0.00, samples=1 00:19:21.088 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:21.088 lat (usec) : 500=43.50%, 750=22.59%, 1000=33.66% 00:19:21.088 lat (msec) : 2=0.26% 00:19:21.088 cpu : usr=2.90%, sys=4.40%, ctx=1555, majf=0, minf=1 00:19:21.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.088 issued rwts: total=530,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.088 job1: (groupid=0, jobs=1): err= 0: pid=555043: Sat Jul 20 17:11:37 2024 00:19:21.088 read: IOPS=18, BW=75.9KiB/s (77.7kB/s)(76.0KiB/1001msec) 00:19:21.088 slat (nsec): min=11998, max=34922, avg=21155.74, stdev=8913.05 00:19:21.088 clat (usec): min=40759, max=41073, avg=40961.76, stdev=65.46 00:19:21.088 lat (usec): min=40772, max=41089, avg=40982.92, stdev=66.15 00:19:21.088 clat percentiles (usec): 00:19:21.088 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:21.088 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:21.088 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:21.088 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:21.088 | 99.99th=[41157] 00:19:21.088 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:19:21.088 slat (nsec): min=7745, max=77379, avg=25736.84, stdev=15066.32 00:19:21.088 clat (usec): min=308, max=1325, avg=399.88, stdev=82.20 00:19:21.088 lat (usec): min=317, max=1378, avg=425.61, stdev=87.08 00:19:21.088 clat percentiles (usec): 00:19:21.088 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 347], 00:19:21.088 | 30.00th=[ 359], 40.00th=[ 375], 50.00th=[ 392], 60.00th=[ 408], 00:19:21.088 | 70.00th=[ 424], 80.00th=[ 441], 90.00th=[ 
457], 95.00th=[ 478], 00:19:21.088 | 99.00th=[ 603], 99.50th=[ 1156], 99.90th=[ 1319], 99.95th=[ 1319], 00:19:21.088 | 99.99th=[ 1319] 00:19:21.088 bw ( KiB/s): min= 4087, max= 4087, per=34.12%, avg=4087.00, stdev= 0.00, samples=1 00:19:21.088 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:19:21.088 lat (usec) : 500=94.35%, 750=1.32%, 1000=0.19% 00:19:21.088 lat (msec) : 2=0.56%, 50=3.58% 00:19:21.088 cpu : usr=1.00%, sys=1.50%, ctx=532, majf=0, minf=1 00:19:21.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.088 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.088 job2: (groupid=0, jobs=1): err= 0: pid=555063: Sat Jul 20 17:11:37 2024 00:19:21.088 read: IOPS=17, BW=71.9KiB/s (73.6kB/s)(72.0KiB/1002msec) 00:19:21.088 slat (nsec): min=15650, max=35096, avg=24378.44, stdev=9526.70 00:19:21.088 clat (usec): min=19964, max=41984, avg=40109.76, stdev=5048.19 00:19:21.088 lat (usec): min=19980, max=42000, avg=40134.14, stdev=5050.29 00:19:21.088 clat percentiles (usec): 00:19:21.088 | 1.00th=[20055], 5.00th=[20055], 10.00th=[41157], 20.00th=[41157], 00:19:21.088 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:21.088 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:21.088 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:21.088 | 99.99th=[42206] 00:19:21.088 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:19:21.088 slat (nsec): min=8179, max=72779, avg=26005.05, stdev=13592.03 00:19:21.088 clat (usec): min=379, max=1299, avg=511.99, stdev=132.46 00:19:21.088 lat (usec): min=398, max=1316, avg=537.99, stdev=132.23 00:19:21.088 clat percentiles (usec): 00:19:21.088 | 1.00th=[ 404], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 445], 00:19:21.088 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 469], 60.00th=[ 482], 00:19:21.088 | 70.00th=[ 494], 80.00th=[ 515], 90.00th=[ 676], 95.00th=[ 840], 00:19:21.088 | 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[ 1303], 99.95th=[ 1303], 00:19:21.088 | 99.99th=[ 1303] 00:19:21.088 bw ( KiB/s): min= 4087, max= 4087, per=34.12%, avg=4087.00, stdev= 0.00, samples=1 00:19:21.088 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:19:21.088 lat (usec) : 500=71.32%, 750=18.49%, 1000=5.47% 00:19:21.088 lat (msec) : 2=1.32%, 20=0.19%, 50=3.21% 00:19:21.088 cpu : usr=0.60%, sys=1.40%, ctx=533, majf=0, minf=1 00:19:21.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.088 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.088 job3: (groupid=0, jobs=1): err= 0: pid=555069: Sat Jul 20 17:11:37 2024 00:19:21.088 read: IOPS=504, BW=2019KiB/s (2068kB/s)(2072KiB/1026msec) 00:19:21.088 slat (nsec): min=6746, max=62846, avg=26758.95, stdev=10671.43 00:19:21.088 clat (usec): min=446, max=41386, avg=1154.98, stdev=4673.25 00:19:21.088 lat (usec): min=457, max=41402, avg=1181.74, stdev=4671.99 00:19:21.088 clat percentiles (usec): 00:19:21.088 | 1.00th=[ 465], 
5.00th=[ 494], 10.00th=[ 506], 20.00th=[ 529], 00:19:21.088 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 635], 00:19:21.088 | 70.00th=[ 652], 80.00th=[ 668], 90.00th=[ 685], 95.00th=[ 734], 00:19:21.088 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:21.088 | 99.99th=[41157] 00:19:21.088 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:19:21.088 slat (nsec): min=7947, max=80119, avg=21073.27, stdev=11919.60 00:19:21.088 clat (usec): min=290, max=3675, avg=372.30, stdev=123.66 00:19:21.088 lat (usec): min=300, max=3686, avg=393.37, stdev=125.05 00:19:21.088 clat percentiles (usec): 00:19:21.088 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 318], 00:19:21.088 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 359], 00:19:21.088 | 70.00th=[ 388], 80.00th=[ 412], 90.00th=[ 465], 95.00th=[ 502], 00:19:21.088 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 1270], 99.95th=[ 3687], 00:19:21.088 | 99.99th=[ 3687] 00:19:21.088 bw ( KiB/s): min= 4087, max= 4096, per=34.16%, avg=4091.50, stdev= 6.36, samples=2 00:19:21.088 iops : min= 1021, max= 1024, avg=1022.50, stdev= 2.12, samples=2 00:19:21.088 lat (usec) : 500=65.82%, 750=32.62%, 1000=0.78% 00:19:21.088 lat (msec) : 2=0.19%, 4=0.13%, 50=0.45% 00:19:21.088 cpu : usr=2.24%, sys=4.20%, ctx=1544, majf=0, minf=2 00:19:21.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.088 issued rwts: total=518,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.088 00:19:21.088 Run status group 0 (all jobs): 00:19:21.088 READ: bw=4230KiB/s (4332kB/s), 71.9KiB/s-2118KiB/s (73.6kB/s-2169kB/s), io=4340KiB (4444kB), run=1001-1026msec 00:19:21.088 WRITE: bw=11.7MiB/s (12.3MB/s), 2044KiB/s-4092KiB/s (2093kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1026msec 00:19:21.088 00:19:21.088 Disk stats (read/write): 00:19:21.088 nvme0n1: ios=561/757, merge=0/0, ticks=730/364, in_queue=1094, util=85.27% 00:19:21.088 nvme0n2: ios=57/512, merge=0/0, ticks=716/164, in_queue=880, util=91.25% 00:19:21.088 nvme0n3: ios=70/512, merge=0/0, ticks=823/239, in_queue=1062, util=93.40% 00:19:21.088 nvme0n4: ios=570/1024, merge=0/0, ticks=661/364, in_queue=1025, util=94.29% 00:19:21.088 17:11:37 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:21.088 [global] 00:19:21.088 thread=1 00:19:21.088 invalidate=1 00:19:21.088 rw=randwrite 00:19:21.088 time_based=1 00:19:21.088 runtime=1 00:19:21.088 ioengine=libaio 00:19:21.088 direct=1 00:19:21.088 bs=4096 00:19:21.088 iodepth=1 00:19:21.088 norandommap=0 00:19:21.088 numjobs=1 00:19:21.088 00:19:21.088 verify_dump=1 00:19:21.088 verify_backlog=512 00:19:21.088 verify_state_save=0 00:19:21.088 do_verify=1 00:19:21.088 verify=crc32c-intel 00:19:21.088 [job0] 00:19:21.088 filename=/dev/nvme0n1 00:19:21.088 [job1] 00:19:21.088 filename=/dev/nvme0n2 00:19:21.088 [job2] 00:19:21.088 filename=/dev/nvme0n3 00:19:21.088 [job3] 00:19:21.088 filename=/dev/nvme0n4 00:19:21.088 Could not set queue depth (nvme0n1) 00:19:21.088 Could not set queue depth (nvme0n2) 00:19:21.088 Could not set queue depth (nvme0n3) 00:19:21.088 Could not set queue depth (nvme0n4) 00:19:21.346 job0: (g=0): rw=randwrite, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.346 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.346 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.346 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.346 fio-3.35 00:19:21.346 Starting 4 threads 00:19:22.719 00:19:22.719 job0: (groupid=0, jobs=1): err= 0: pid=555397: Sat Jul 20 17:11:38 2024 00:19:22.719 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:19:22.719 slat (nsec): min=15719, max=42715, avg=26538.16, stdev=10374.71 00:19:22.719 clat (usec): min=36510, max=41198, avg=40740.95, stdev=1027.22 00:19:22.719 lat (usec): min=36526, max=41216, avg=40767.49, stdev=1029.53 00:19:22.719 clat percentiles (usec): 00:19:22.719 | 1.00th=[36439], 5.00th=[36439], 10.00th=[40633], 20.00th=[41157], 00:19:22.719 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:22.719 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:22.719 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:22.719 | 99.99th=[41157] 00:19:22.719 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:19:22.719 slat (nsec): min=10295, max=92359, avg=33538.05, stdev=14700.15 00:19:22.719 clat (usec): min=311, max=1067, avg=410.23, stdev=67.40 00:19:22.719 lat (usec): min=323, max=1091, avg=443.77, stdev=71.22 00:19:22.719 clat percentiles (usec): 00:19:22.719 | 1.00th=[ 314], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 355], 00:19:22.719 | 30.00th=[ 371], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 420], 00:19:22.719 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 474], 95.00th=[ 506], 00:19:22.719 | 99.00th=[ 586], 99.50th=[ 676], 99.90th=[ 1074], 99.95th=[ 1074], 00:19:22.719 | 99.99th=[ 1074] 00:19:22.719 bw ( KiB/s): min= 4096, max= 4096, per=29.12%, avg=4096.00, stdev= 0.00, samples=1 00:19:22.719 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:22.720 lat (usec) : 500=90.40%, 750=5.65% 00:19:22.720 lat (msec) : 2=0.38%, 50=3.58% 00:19:22.720 cpu : usr=1.09%, sys=2.09%, ctx=532, majf=0, minf=1 00:19:22.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.720 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.720 job1: (groupid=0, jobs=1): err= 0: pid=555398: Sat Jul 20 17:11:38 2024 00:19:22.720 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:19:22.720 slat (nsec): min=8495, max=61567, avg=25002.28, stdev=9056.65 00:19:22.720 clat (usec): min=576, max=41112, avg=827.49, stdev=1858.90 00:19:22.720 lat (usec): min=592, max=41126, avg=852.49, stdev=1858.17 00:19:22.720 clat percentiles (usec): 00:19:22.720 | 1.00th=[ 586], 5.00th=[ 603], 10.00th=[ 611], 20.00th=[ 627], 00:19:22.720 | 30.00th=[ 644], 40.00th=[ 652], 50.00th=[ 660], 60.00th=[ 676], 00:19:22.720 | 70.00th=[ 758], 80.00th=[ 840], 90.00th=[ 996], 95.00th=[ 1012], 00:19:22.720 | 99.00th=[ 1037], 99.50th=[ 1037], 99.90th=[41157], 99.95th=[41157], 00:19:22.720 | 99.99th=[41157] 00:19:22.720 write: IOPS=980, BW=3920KiB/s (4014kB/s)(3924KiB/1001msec); 0 zone resets 
00:19:22.720 slat (nsec): min=7537, max=71708, avg=24127.98, stdev=9931.14 00:19:22.720 clat (usec): min=303, max=948, avg=540.12, stdev=153.38 00:19:22.720 lat (usec): min=316, max=964, avg=564.25, stdev=151.04 00:19:22.720 clat percentiles (usec): 00:19:22.720 | 1.00th=[ 318], 5.00th=[ 351], 10.00th=[ 367], 20.00th=[ 404], 00:19:22.720 | 30.00th=[ 441], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 586], 00:19:22.720 | 70.00th=[ 644], 80.00th=[ 668], 90.00th=[ 775], 95.00th=[ 832], 00:19:22.720 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 947], 99.95th=[ 947], 00:19:22.720 | 99.99th=[ 947] 00:19:22.720 bw ( KiB/s): min= 4096, max= 4096, per=29.12%, avg=4096.00, stdev= 0.00, samples=1 00:19:22.720 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:22.720 lat (usec) : 500=37.78%, 750=43.60%, 1000=15.61% 00:19:22.720 lat (msec) : 2=2.88%, 20=0.07%, 50=0.07% 00:19:22.720 cpu : usr=1.60%, sys=4.10%, ctx=1493, majf=0, minf=2 00:19:22.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.720 issued rwts: total=512,981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.720 job2: (groupid=0, jobs=1): err= 0: pid=555399: Sat Jul 20 17:11:38 2024 00:19:22.720 read: IOPS=885, BW=3540KiB/s (3625kB/s)(3544KiB/1001msec) 00:19:22.720 slat (nsec): min=7607, max=64251, avg=19751.80, stdev=8827.22 00:19:22.720 clat (usec): min=440, max=4101, avg=574.40, stdev=207.07 00:19:22.720 lat (usec): min=452, max=4130, avg=594.15, stdev=206.15 00:19:22.720 clat percentiles (usec): 00:19:22.720 | 1.00th=[ 449], 5.00th=[ 457], 10.00th=[ 461], 20.00th=[ 469], 00:19:22.720 | 30.00th=[ 478], 40.00th=[ 490], 50.00th=[ 502], 60.00th=[ 515], 00:19:22.720 | 70.00th=[ 537], 80.00th=[ 742], 90.00th=[ 832], 95.00th=[ 906], 00:19:22.720 | 99.00th=[ 988], 99.50th=[ 1106], 99.90th=[ 4113], 99.95th=[ 4113], 00:19:22.720 | 99.99th=[ 4113] 00:19:22.720 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:22.720 slat (usec): min=11, max=23750, avg=55.17, stdev=741.30 00:19:22.720 clat (usec): min=305, max=557, avg=394.81, stdev=43.95 00:19:22.720 lat (usec): min=317, max=24233, avg=449.98, stdev=745.59 00:19:22.720 clat percentiles (usec): 00:19:22.720 | 1.00th=[ 318], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 351], 00:19:22.720 | 30.00th=[ 363], 40.00th=[ 379], 50.00th=[ 400], 60.00th=[ 412], 00:19:22.720 | 70.00th=[ 420], 80.00th=[ 429], 90.00th=[ 449], 95.00th=[ 469], 00:19:22.720 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[ 553], 00:19:22.720 | 99.99th=[ 553] 00:19:22.720 bw ( KiB/s): min= 4096, max= 4096, per=29.12%, avg=4096.00, stdev= 0.00, samples=1 00:19:22.720 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:22.720 lat (usec) : 500=76.28%, 750=15.13%, 1000=8.22% 00:19:22.720 lat (msec) : 2=0.26%, 4=0.05%, 10=0.05% 00:19:22.720 cpu : usr=3.50%, sys=6.60%, ctx=1912, majf=0, minf=1 00:19:22.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.720 issued rwts: total=886,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.720 latency : target=0, window=0, percentile=100.00%, depth=1 
00:19:22.720 job3: (groupid=0, jobs=1): err= 0: pid=555400: Sat Jul 20 17:11:38 2024 00:19:22.720 read: IOPS=535, BW=2142KiB/s (2193kB/s)(2144KiB/1001msec) 00:19:22.720 slat (nsec): min=11106, max=69202, avg=27579.26, stdev=9597.89 00:19:22.720 clat (usec): min=570, max=3622, avg=771.73, stdev=152.30 00:19:22.720 lat (usec): min=593, max=3640, avg=799.31, stdev=153.39 00:19:22.720 clat percentiles (usec): 00:19:22.720 | 1.00th=[ 594], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 644], 00:19:22.720 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 807], 00:19:22.720 | 70.00th=[ 816], 80.00th=[ 832], 90.00th=[ 848], 95.00th=[ 881], 00:19:22.720 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 3621], 99.95th=[ 3621], 00:19:22.720 | 99.99th=[ 3621] 00:19:22.720 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:22.720 slat (nsec): min=8717, max=76693, avg=24736.86, stdev=9376.27 00:19:22.720 clat (usec): min=384, max=1075, avg=523.58, stdev=107.63 00:19:22.720 lat (usec): min=398, max=1102, avg=548.32, stdev=110.17 00:19:22.720 clat percentiles (usec): 00:19:22.720 | 1.00th=[ 396], 5.00th=[ 424], 10.00th=[ 429], 20.00th=[ 441], 00:19:22.720 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 486], 00:19:22.720 | 70.00th=[ 586], 80.00th=[ 644], 90.00th=[ 676], 95.00th=[ 709], 00:19:22.720 | 99.00th=[ 840], 99.50th=[ 922], 99.90th=[ 971], 99.95th=[ 1074], 00:19:22.720 | 99.99th=[ 1074] 00:19:22.720 bw ( KiB/s): min= 4096, max= 4096, per=29.12%, avg=4096.00, stdev= 0.00, samples=1 00:19:22.720 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:22.720 lat (usec) : 500=41.92%, 750=31.92%, 1000=25.96% 00:19:22.720 lat (msec) : 2=0.13%, 4=0.06% 00:19:22.720 cpu : usr=2.00%, sys=4.20%, ctx=1562, majf=0, minf=1 00:19:22.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.720 issued rwts: total=536,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.720 00:19:22.720 Run status group 0 (all jobs): 00:19:22.720 READ: bw=7758KiB/s (7944kB/s), 75.5KiB/s-3540KiB/s (77.3kB/s-3625kB/s), io=7812KiB (7999kB), run=1001-1007msec 00:19:22.720 WRITE: bw=13.7MiB/s (14.4MB/s), 2034KiB/s-4092KiB/s (2083kB/s-4190kB/s), io=13.8MiB (14.5MB), run=1001-1007msec 00:19:22.720 00:19:22.720 Disk stats (read/write): 00:19:22.720 nvme0n1: ios=39/512, merge=0/0, ticks=1519/155, in_queue=1674, util=90.08% 00:19:22.720 nvme0n2: ios=562/779, merge=0/0, ticks=526/370, in_queue=896, util=92.99% 00:19:22.720 nvme0n3: ios=771/1024, merge=0/0, ticks=925/356, in_queue=1281, util=97.92% 00:19:22.720 nvme0n4: ios=560/766, merge=0/0, ticks=1127/406, in_queue=1533, util=99.90% 00:19:22.720 17:11:38 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:22.720 [global] 00:19:22.720 thread=1 00:19:22.720 invalidate=1 00:19:22.720 rw=write 00:19:22.720 time_based=1 00:19:22.720 runtime=1 00:19:22.721 ioengine=libaio 00:19:22.721 direct=1 00:19:22.721 bs=4096 00:19:22.721 iodepth=128 00:19:22.721 norandommap=0 00:19:22.721 numjobs=1 00:19:22.721 00:19:22.721 verify_dump=1 00:19:22.721 verify_backlog=512 00:19:22.721 verify_state_save=0 00:19:22.721 do_verify=1 00:19:22.721 verify=crc32c-intel 00:19:22.721 [job0] 00:19:22.721 
filename=/dev/nvme0n1 00:19:22.721 [job1] 00:19:22.721 filename=/dev/nvme0n2 00:19:22.721 [job2] 00:19:22.721 filename=/dev/nvme0n3 00:19:22.721 [job3] 00:19:22.721 filename=/dev/nvme0n4 00:19:22.721 Could not set queue depth (nvme0n1) 00:19:22.721 Could not set queue depth (nvme0n2) 00:19:22.721 Could not set queue depth (nvme0n3) 00:19:22.721 Could not set queue depth (nvme0n4) 00:19:22.721 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.721 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.721 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.721 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.721 fio-3.35 00:19:22.721 Starting 4 threads 00:19:24.107 00:19:24.108 job0: (groupid=0, jobs=1): err= 0: pid=555627: Sat Jul 20 17:11:39 2024 00:19:24.108 read: IOPS=3349, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1003msec) 00:19:24.108 slat (usec): min=3, max=7026, avg=87.30, stdev=404.60 00:19:24.108 clat (usec): min=893, max=23315, avg=9986.84, stdev=3172.73 00:19:24.108 lat (usec): min=2628, max=23336, avg=10074.14, stdev=3202.75 00:19:24.108 clat percentiles (usec): 00:19:24.108 | 1.00th=[ 5669], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 7832], 00:19:24.108 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:19:24.108 | 70.00th=[10421], 80.00th=[11731], 90.00th=[14353], 95.00th=[17433], 00:19:24.108 | 99.00th=[20579], 99.50th=[21365], 99.90th=[23200], 99.95th=[23200], 00:19:24.108 | 99.99th=[23200] 00:19:24.108 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:19:24.108 slat (usec): min=4, max=7329, avg=187.38, stdev=569.69 00:19:24.108 clat (usec): min=6625, max=40165, avg=26015.37, stdev=7157.86 00:19:24.108 lat (usec): min=6644, max=40201, avg=26202.75, stdev=7213.99 00:19:24.108 clat percentiles (usec): 00:19:24.108 | 1.00th=[ 8225], 5.00th=[12387], 10.00th=[14877], 20.00th=[19530], 00:19:24.108 | 30.00th=[22676], 40.00th=[25560], 50.00th=[28181], 60.00th=[30278], 00:19:24.108 | 70.00th=[31851], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:19:24.108 | 99.00th=[34866], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:19:24.108 | 99.99th=[40109] 00:19:24.108 bw ( KiB/s): min=14144, max=14528, per=36.08%, avg=14336.00, stdev=271.53, samples=2 00:19:24.108 iops : min= 3536, max= 3632, avg=3584.00, stdev=67.88, samples=2 00:19:24.108 lat (usec) : 1000=0.01% 00:19:24.108 lat (msec) : 4=0.13%, 10=33.54%, 20=25.24%, 50=41.07% 00:19:24.108 cpu : usr=3.59%, sys=8.48%, ctx=581, majf=0, minf=1 00:19:24.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:24.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.108 issued rwts: total=3360,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.108 job1: (groupid=0, jobs=1): err= 0: pid=555628: Sat Jul 20 17:11:39 2024 00:19:24.108 read: IOPS=2067, BW=8269KiB/s (8468kB/s)(8352KiB/1010msec) 00:19:24.108 slat (usec): min=3, max=120873, avg=200.89, stdev=2941.83 00:19:24.108 clat (msec): min=5, max=149, avg=23.46, stdev=31.61 00:19:24.108 lat (msec): min=5, max=149, avg=23.66, stdev=31.73 00:19:24.108 clat percentiles (msec): 
00:19:24.108 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:19:24.108 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 11], 60.00th=[ 14], 00:19:24.108 | 70.00th=[ 20], 80.00th=[ 26], 90.00th=[ 57], 95.00th=[ 134], 00:19:24.108 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 150], 00:19:24.108 | 99.99th=[ 150] 00:19:24.108 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:19:24.108 slat (usec): min=3, max=62875, avg=211.83, stdev=1961.78 00:19:24.108 clat (usec): min=751, max=122057, avg=30918.51, stdev=27531.05 00:19:24.108 lat (usec): min=760, max=122072, avg=31130.33, stdev=27639.62 00:19:24.108 clat percentiles (msec): 00:19:24.108 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 15], 00:19:24.108 | 30.00th=[ 19], 40.00th=[ 22], 50.00th=[ 27], 60.00th=[ 29], 00:19:24.108 | 70.00th=[ 32], 80.00th=[ 33], 90.00th=[ 58], 95.00th=[ 110], 00:19:24.108 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 123], 99.95th=[ 123], 00:19:24.108 | 99.99th=[ 123] 00:19:24.108 bw ( KiB/s): min= 5408, max=14376, per=24.90%, avg=9892.00, stdev=6341.33, samples=2 00:19:24.108 iops : min= 1352, max= 3594, avg=2473.00, stdev=1585.33, samples=2 00:19:24.108 lat (usec) : 1000=0.06% 00:19:24.108 lat (msec) : 2=0.19%, 4=0.84%, 10=28.87%, 20=21.99%, 50=36.79% 00:19:24.108 lat (msec) : 100=3.40%, 250=7.85% 00:19:24.108 cpu : usr=2.68%, sys=5.15%, ctx=398, majf=0, minf=1 00:19:24.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:24.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.108 issued rwts: total=2088,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.108 job2: (groupid=0, jobs=1): err= 0: pid=555629: Sat Jul 20 17:11:39 2024 00:19:24.108 read: IOPS=1531, BW=6126KiB/s (6273kB/s)(6144KiB/1003msec) 00:19:24.108 slat (usec): min=3, max=155721, avg=391.91, stdev=6371.10 00:19:24.108 clat (msec): min=6, max=447, avg=49.74, stdev=88.34 00:19:24.108 lat (msec): min=6, max=447, avg=50.13, stdev=88.91 00:19:24.108 clat percentiles (msec): 00:19:24.108 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:19:24.108 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:19:24.108 | 70.00th=[ 20], 80.00th=[ 24], 90.00th=[ 157], 95.00th=[ 300], 00:19:24.108 | 99.00th=[ 309], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 447], 00:19:24.108 | 99.99th=[ 447] 00:19:24.108 write: IOPS=1835, BW=7342KiB/s (7518kB/s)(7364KiB/1003msec); 0 zone resets 00:19:24.108 slat (usec): min=5, max=47128, avg=207.47, stdev=1563.82 00:19:24.108 clat (usec): min=606, max=64512, avg=27396.47, stdev=15063.96 00:19:24.108 lat (usec): min=5611, max=64523, avg=27603.94, stdev=15109.16 00:19:24.108 clat percentiles (usec): 00:19:24.108 | 1.00th=[ 5866], 5.00th=[10683], 10.00th=[11731], 20.00th=[14484], 00:19:24.108 | 30.00th=[16581], 40.00th=[19530], 50.00th=[26084], 60.00th=[31327], 00:19:24.108 | 70.00th=[32113], 80.00th=[33162], 90.00th=[57934], 95.00th=[60031], 00:19:24.108 | 99.00th=[63177], 99.50th=[63701], 99.90th=[64750], 99.95th=[64750], 00:19:24.108 | 99.99th=[64750] 00:19:24.108 bw ( KiB/s): min= 4096, max= 9608, per=17.24%, avg=6852.00, stdev=3897.57, samples=2 00:19:24.108 iops : min= 1024, max= 2402, avg=1713.00, stdev=974.39, samples=2 00:19:24.108 lat (usec) : 750=0.03% 00:19:24.108 lat (msec) : 10=4.26%, 20=53.18%, 50=27.81%, 100=7.52%, 250=3.44% 00:19:24.108 
lat (msec) : 500=3.76% 00:19:24.108 cpu : usr=2.00%, sys=3.89%, ctx=233, majf=0, minf=1 00:19:24.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:19:24.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.108 issued rwts: total=1536,1841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.108 job3: (groupid=0, jobs=1): err= 0: pid=555630: Sat Jul 20 17:11:39 2024 00:19:24.108 read: IOPS=1725, BW=6902KiB/s (7068kB/s)(6916KiB/1002msec) 00:19:24.108 slat (usec): min=4, max=137544, avg=326.69, stdev=4788.47 00:19:24.108 clat (usec): min=789, max=319778, avg=36640.19, stdev=66077.62 00:19:24.108 lat (usec): min=798, max=319802, avg=36966.88, stdev=66571.80 00:19:24.108 clat percentiles (msec): 00:19:24.108 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 10], 00:19:24.108 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:19:24.108 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 153], 95.00th=[ 215], 00:19:24.108 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 284], 99.95th=[ 321], 00:19:24.108 | 99.99th=[ 321] 00:19:24.108 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:19:24.108 slat (usec): min=5, max=67305, avg=192.19, stdev=1667.89 00:19:24.108 clat (msec): min=5, max=284, avg=25.10, stdev=21.70 00:19:24.108 lat (msec): min=5, max=284, avg=25.29, stdev=21.76 00:19:24.108 clat percentiles (msec): 00:19:24.108 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 16], 00:19:24.108 | 30.00th=[ 19], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 25], 00:19:24.108 | 70.00th=[ 26], 80.00th=[ 28], 90.00th=[ 33], 95.00th=[ 37], 00:19:24.108 | 99.00th=[ 106], 99.50th=[ 215], 99.90th=[ 215], 99.95th=[ 215], 00:19:24.108 | 99.99th=[ 284] 00:19:24.108 bw ( KiB/s): min= 4096, max=12288, per=20.62%, avg=8192.00, stdev=5792.62, samples=2 00:19:24.108 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:19:24.108 lat (usec) : 1000=0.03% 00:19:24.108 lat (msec) : 2=0.21%, 4=1.06%, 10=11.49%, 20=45.88%, 50=33.02% 00:19:24.108 lat (msec) : 100=1.27%, 250=6.14%, 500=0.90% 00:19:24.108 cpu : usr=2.70%, sys=4.20%, ctx=323, majf=0, minf=1 00:19:24.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:19:24.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.108 issued rwts: total=1729,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.108 00:19:24.108 Run status group 0 (all jobs): 00:19:24.108 READ: bw=33.7MiB/s (35.3MB/s), 6126KiB/s-13.1MiB/s (6273kB/s-13.7MB/s), io=34.0MiB (35.7MB), run=1002-1010msec 00:19:24.108 WRITE: bw=38.8MiB/s (40.7MB/s), 7342KiB/s-14.0MiB/s (7518kB/s-14.6MB/s), io=39.2MiB (41.1MB), run=1002-1010msec 00:19:24.108 00:19:24.108 Disk stats (read/write): 00:19:24.108 nvme0n1: ios=2942/3072, merge=0/0, ticks=14619/36422, in_queue=51041, util=87.98% 00:19:24.108 nvme0n2: ios=2098/2151, merge=0/0, ticks=47519/46539, in_queue=94058, util=92.99% 00:19:24.108 nvme0n3: ios=1047/1424, merge=0/0, ticks=35727/17421, in_queue=53148, util=97.71% 00:19:24.108 nvme0n4: ios=1041/1530, merge=0/0, ticks=51139/24266, in_queue=75405, util=100.00% 00:19:24.108 17:11:39 -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:24.108 [global] 00:19:24.108 thread=1 00:19:24.108 invalidate=1 00:19:24.108 rw=randwrite 00:19:24.108 time_based=1 00:19:24.108 runtime=1 00:19:24.108 ioengine=libaio 00:19:24.108 direct=1 00:19:24.108 bs=4096 00:19:24.108 iodepth=128 00:19:24.108 norandommap=0 00:19:24.108 numjobs=1 00:19:24.108 00:19:24.108 verify_dump=1 00:19:24.108 verify_backlog=512 00:19:24.108 verify_state_save=0 00:19:24.108 do_verify=1 00:19:24.108 verify=crc32c-intel 00:19:24.108 [job0] 00:19:24.108 filename=/dev/nvme0n1 00:19:24.108 [job1] 00:19:24.108 filename=/dev/nvme0n2 00:19:24.108 [job2] 00:19:24.108 filename=/dev/nvme0n3 00:19:24.108 [job3] 00:19:24.108 filename=/dev/nvme0n4 00:19:24.108 Could not set queue depth (nvme0n1) 00:19:24.108 Could not set queue depth (nvme0n2) 00:19:24.108 Could not set queue depth (nvme0n3) 00:19:24.108 Could not set queue depth (nvme0n4) 00:19:24.108 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.108 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.108 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.108 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.108 fio-3.35 00:19:24.108 Starting 4 threads 00:19:25.483 00:19:25.483 job0: (groupid=0, jobs=1): err= 0: pid=555868: Sat Jul 20 17:11:41 2024 00:19:25.483 read: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec) 00:19:25.483 slat (usec): min=2, max=27169, avg=125.38, stdev=864.31 00:19:25.483 clat (usec): min=5246, max=72471, avg=15928.74, stdev=10146.97 00:19:25.483 lat (usec): min=6397, max=72477, avg=16054.12, stdev=10215.46 00:19:25.483 clat percentiles (usec): 00:19:25.483 | 1.00th=[ 6521], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[10290], 00:19:25.483 | 30.00th=[10683], 40.00th=[11207], 50.00th=[12649], 60.00th=[14746], 00:19:25.483 | 70.00th=[15795], 80.00th=[18220], 90.00th=[26346], 95.00th=[36963], 00:19:25.483 | 99.00th=[62653], 99.50th=[69731], 99.90th=[70779], 99.95th=[72877], 00:19:25.483 | 99.99th=[72877] 00:19:25.483 write: IOPS=3375, BW=13.2MiB/s (13.8MB/s)(13.8MiB/1043msec); 0 zone resets 00:19:25.483 slat (usec): min=3, max=9550, avg=148.54, stdev=638.13 00:19:25.483 clat (usec): min=2630, max=81202, avg=23673.88, stdev=12661.49 00:19:25.483 lat (usec): min=2652, max=81220, avg=23822.42, stdev=12727.07 00:19:25.483 clat percentiles (usec): 00:19:25.483 | 1.00th=[ 5538], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[12518], 00:19:25.483 | 30.00th=[15401], 40.00th=[18744], 50.00th=[23200], 60.00th=[25822], 00:19:25.483 | 70.00th=[28967], 80.00th=[32375], 90.00th=[37487], 95.00th=[43779], 00:19:25.483 | 99.00th=[73925], 99.50th=[76022], 99.90th=[81265], 99.95th=[81265], 00:19:25.483 | 99.99th=[81265] 00:19:25.483 bw ( KiB/s): min=13296, max=13848, per=26.03%, avg=13572.00, stdev=390.32, samples=2 00:19:25.483 iops : min= 3324, max= 3462, avg=3393.00, stdev=97.58, samples=2 00:19:25.483 lat (msec) : 4=0.21%, 10=15.26%, 20=47.14%, 50=34.67%, 100=2.72% 00:19:25.483 cpu : usr=3.36%, sys=6.91%, ctx=568, majf=0, minf=1 00:19:25.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.483 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.483 issued rwts: total=3072,3521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.483 job1: (groupid=0, jobs=1): err= 0: pid=555869: Sat Jul 20 17:11:41 2024 00:19:25.483 read: IOPS=3342, BW=13.1MiB/s (13.7MB/s)(13.3MiB/1020msec) 00:19:25.483 slat (usec): min=3, max=20822, avg=137.81, stdev=1003.59 00:19:25.483 clat (usec): min=3844, max=53932, avg=18146.13, stdev=8754.40 00:19:25.483 lat (usec): min=3856, max=53941, avg=18283.94, stdev=8787.95 00:19:25.483 clat percentiles (usec): 00:19:25.483 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[11469], 00:19:25.483 | 30.00th=[13566], 40.00th=[14746], 50.00th=[16057], 60.00th=[16909], 00:19:25.483 | 70.00th=[18482], 80.00th=[21890], 90.00th=[32900], 95.00th=[38536], 00:19:25.483 | 99.00th=[44827], 99.50th=[50070], 99.90th=[53740], 99.95th=[53740], 00:19:25.483 | 99.99th=[53740] 00:19:25.483 write: IOPS=3513, BW=13.7MiB/s (14.4MB/s)(14.0MiB/1020msec); 0 zone resets 00:19:25.483 slat (usec): min=4, max=28596, avg=138.38, stdev=967.57 00:19:25.483 clat (usec): min=3451, max=56831, avg=18788.35, stdev=10895.26 00:19:25.483 lat (usec): min=4469, max=56838, avg=18926.74, stdev=10958.74 00:19:25.483 clat percentiles (usec): 00:19:25.483 | 1.00th=[ 5014], 5.00th=[ 7373], 10.00th=[ 8848], 20.00th=[ 9765], 00:19:25.483 | 30.00th=[10945], 40.00th=[12911], 50.00th=[15795], 60.00th=[17695], 00:19:25.483 | 70.00th=[20579], 80.00th=[28181], 90.00th=[38011], 95.00th=[40633], 00:19:25.483 | 99.00th=[48497], 99.50th=[48497], 99.90th=[56886], 99.95th=[56886], 00:19:25.483 | 99.99th=[56886] 00:19:25.483 bw ( KiB/s): min=12288, max=16384, per=27.50%, avg=14336.00, stdev=2896.31, samples=2 00:19:25.483 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:25.483 lat (msec) : 4=0.06%, 10=18.89%, 20=52.65%, 50=27.97%, 100=0.43% 00:19:25.483 cpu : usr=5.50%, sys=7.16%, ctx=313, majf=0, minf=1 00:19:25.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.483 issued rwts: total=3409,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.483 job2: (groupid=0, jobs=1): err= 0: pid=555870: Sat Jul 20 17:11:41 2024 00:19:25.483 read: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec) 00:19:25.483 slat (usec): min=2, max=9250, avg=117.12, stdev=593.30 00:19:25.483 clat (usec): min=6003, max=48566, avg=13573.12, stdev=6663.90 00:19:25.483 lat (usec): min=6083, max=48575, avg=13690.24, stdev=6717.92 00:19:25.483 clat percentiles (usec): 00:19:25.483 | 1.00th=[ 6718], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 9372], 00:19:25.483 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11600], 60.00th=[12256], 00:19:25.483 | 70.00th=[13698], 80.00th=[15401], 90.00th=[22938], 95.00th=[29230], 00:19:25.483 | 99.00th=[40109], 99.50th=[43254], 99.90th=[48497], 99.95th=[48497], 00:19:25.483 | 99.99th=[48497] 00:19:25.483 write: IOPS=3392, BW=13.2MiB/s (13.9MB/s)(13.8MiB/1043msec); 0 zone resets 00:19:25.483 slat (usec): min=4, max=8635, avg=167.28, stdev=615.67 00:19:25.483 clat (usec): min=1187, max=87599, avg=25604.01, stdev=14502.03 00:19:25.483 lat (usec): min=1195, max=87612, avg=25771.29, stdev=14586.26 00:19:25.483 clat percentiles (usec): 00:19:25.483 | 
1.00th=[ 4359], 5.00th=[ 6652], 10.00th=[ 7898], 20.00th=[10683], 00:19:25.483 | 30.00th=[14615], 40.00th=[21103], 50.00th=[27395], 60.00th=[30016], 00:19:25.483 | 70.00th=[32113], 80.00th=[33162], 90.00th=[44303], 95.00th=[46400], 00:19:25.483 | 99.00th=[76022], 99.50th=[81265], 99.90th=[87557], 99.95th=[87557], 00:19:25.483 | 99.99th=[87557] 00:19:25.483 bw ( KiB/s): min=10944, max=16344, per=26.17%, avg=13644.00, stdev=3818.38, samples=2 00:19:25.483 iops : min= 2736, max= 4086, avg=3411.00, stdev=954.59, samples=2 00:19:25.483 lat (msec) : 2=0.05%, 4=0.29%, 10=23.15%, 20=37.73%, 50=37.02% 00:19:25.483 lat (msec) : 100=1.77% 00:19:25.483 cpu : usr=3.74%, sys=5.37%, ctx=544, majf=0, minf=1 00:19:25.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:25.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.483 issued rwts: total=3072,3538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.483 job3: (groupid=0, jobs=1): err= 0: pid=555871: Sat Jul 20 17:11:41 2024 00:19:25.483 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:19:25.483 slat (usec): min=2, max=73921, avg=183.28, stdev=2215.78 00:19:25.483 clat (usec): min=1195, max=160708, avg=28527.37, stdev=29394.77 00:19:25.483 lat (usec): min=1202, max=197461, avg=28710.65, stdev=29578.88 00:19:25.483 clat percentiles (usec): 00:19:25.483 | 1.00th=[ 1401], 5.00th=[ 2868], 10.00th=[ 9896], 20.00th=[ 13304], 00:19:25.483 | 30.00th=[ 15401], 40.00th=[ 17171], 50.00th=[ 20055], 60.00th=[ 23987], 00:19:25.483 | 70.00th=[ 28705], 80.00th=[ 33817], 90.00th=[ 41157], 95.00th=[ 86508], 00:19:25.483 | 99.00th=[137364], 99.50th=[160433], 99.90th=[160433], 99.95th=[160433], 00:19:25.483 | 99.99th=[160433] 00:19:25.483 write: IOPS=2921, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1010msec); 0 zone resets 00:19:25.483 slat (usec): min=3, max=16819, avg=135.43, stdev=840.59 00:19:25.483 clat (usec): min=1350, max=39420, avg=18666.80, stdev=5169.37 00:19:25.483 lat (usec): min=1359, max=39429, avg=18802.23, stdev=5218.29 00:19:25.483 clat percentiles (usec): 00:19:25.484 | 1.00th=[ 2802], 5.00th=[ 8225], 10.00th=[13435], 20.00th=[16712], 00:19:25.484 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17957], 60.00th=[19268], 00:19:25.484 | 70.00th=[20317], 80.00th=[21627], 90.00th=[25560], 95.00th=[27395], 00:19:25.484 | 99.00th=[33162], 99.50th=[36439], 99.90th=[39584], 99.95th=[39584], 00:19:25.484 | 99.99th=[39584] 00:19:25.484 bw ( KiB/s): min= 9872, max=12720, per=21.67%, avg=11296.00, stdev=2013.84, samples=2 00:19:25.484 iops : min= 2468, max= 3180, avg=2824.00, stdev=503.46, samples=2 00:19:25.484 lat (msec) : 2=2.21%, 4=3.08%, 10=2.30%, 20=51.97%, 50=35.80% 00:19:25.484 lat (msec) : 100=2.32%, 250=2.30% 00:19:25.484 cpu : usr=4.16%, sys=5.45%, ctx=316, majf=0, minf=1 00:19:25.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:25.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.484 issued rwts: total=2560,2951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.484 00:19:25.484 Run status group 0 (all jobs): 00:19:25.484 READ: bw=45.4MiB/s (47.6MB/s), 9.90MiB/s-13.1MiB/s (10.4MB/s-13.7MB/s), io=47.3MiB (49.6MB), 
run=1010-1043msec 00:19:25.484 WRITE: bw=50.9MiB/s (53.4MB/s), 11.4MiB/s-13.7MiB/s (12.0MB/s-14.4MB/s), io=53.1MiB (55.7MB), run=1010-1043msec 00:19:25.484 00:19:25.484 Disk stats (read/write): 00:19:25.484 nvme0n1: ios=2580/2888, merge=0/0, ticks=38752/53425, in_queue=92177, util=97.19% 00:19:25.484 nvme0n2: ios=2632/3072, merge=0/0, ticks=46008/57689, in_queue=103697, util=97.56% 00:19:25.484 nvme0n3: ios=2560/3054, merge=0/0, ticks=32236/67935, in_queue=100171, util=88.14% 00:19:25.484 nvme0n4: ios=2073/2554, merge=0/0, ticks=45938/24871, in_queue=70809, util=97.67% 00:19:25.484 17:11:41 -- target/fio.sh@55 -- # sync 00:19:25.484 17:11:41 -- target/fio.sh@59 -- # fio_pid=556009 00:19:25.484 17:11:41 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:25.484 17:11:41 -- target/fio.sh@61 -- # sleep 3 00:19:25.484 [global] 00:19:25.484 thread=1 00:19:25.484 invalidate=1 00:19:25.484 rw=read 00:19:25.484 time_based=1 00:19:25.484 runtime=10 00:19:25.484 ioengine=libaio 00:19:25.484 direct=1 00:19:25.484 bs=4096 00:19:25.484 iodepth=1 00:19:25.484 norandommap=1 00:19:25.484 numjobs=1 00:19:25.484 00:19:25.484 [job0] 00:19:25.484 filename=/dev/nvme0n1 00:19:25.484 [job1] 00:19:25.484 filename=/dev/nvme0n2 00:19:25.484 [job2] 00:19:25.484 filename=/dev/nvme0n3 00:19:25.484 [job3] 00:19:25.484 filename=/dev/nvme0n4 00:19:25.484 Could not set queue depth (nvme0n1) 00:19:25.484 Could not set queue depth (nvme0n2) 00:19:25.484 Could not set queue depth (nvme0n3) 00:19:25.484 Could not set queue depth (nvme0n4) 00:19:25.770 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.770 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.770 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.770 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.770 fio-3.35 00:19:25.770 Starting 4 threads 00:19:28.295 17:11:44 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:28.858 17:11:44 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:28.858 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3162112, buflen=4096 00:19:28.858 fio: pid=556227, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.858 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=311296, buflen=4096 00:19:28.858 fio: pid=556226, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.858 17:11:44 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.858 17:11:44 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:29.116 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=344064, buflen=4096 00:19:29.116 fio: pid=556184, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:29.116 17:11:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.116 17:11:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:29.373 fio: io_u error on file 
/dev/nvme0n2: Remote I/O error: read offset=17412096, buflen=4096 00:19:29.373 fio: pid=556207, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:29.373 17:11:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.373 17:11:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:29.373 00:19:29.373 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=556184: Sat Jul 20 17:11:45 2024 00:19:29.373 read: IOPS=25, BW=99.3KiB/s (102kB/s)(336KiB/3385msec) 00:19:29.373 slat (usec): min=12, max=9760, avg=169.45, stdev=1093.22 00:19:29.373 clat (usec): min=835, max=43088, avg=40102.23, stdev=6146.10 00:19:29.373 lat (usec): min=860, max=51014, avg=40273.29, stdev=6272.08 00:19:29.373 clat percentiles (usec): 00:19:29.373 | 1.00th=[ 832], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:29.373 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:29.373 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:29.373 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:29.373 | 99.99th=[43254] 00:19:29.373 bw ( KiB/s): min= 96, max= 104, per=1.72%, avg=98.67, stdev= 4.13, samples=6 00:19:29.373 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:19:29.373 lat (usec) : 1000=1.18% 00:19:29.373 lat (msec) : 2=1.18%, 50=96.47% 00:19:29.373 cpu : usr=0.12%, sys=0.00%, ctx=87, majf=0, minf=1 00:19:29.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.374 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.374 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:29.374 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=556207: Sat Jul 20 17:11:45 2024 00:19:29.374 read: IOPS=1165, BW=4662KiB/s (4774kB/s)(16.6MiB/3647msec) 00:19:29.374 slat (usec): min=5, max=7845, avg=20.09, stdev=120.32 00:19:29.374 clat (usec): min=415, max=42190, avg=833.07, stdev=3628.85 00:19:29.374 lat (usec): min=421, max=50036, avg=853.16, stdev=3652.12 00:19:29.374 clat percentiles (usec): 00:19:29.374 | 1.00th=[ 429], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 469], 00:19:29.374 | 30.00th=[ 474], 40.00th=[ 482], 50.00th=[ 490], 60.00th=[ 498], 00:19:29.374 | 70.00th=[ 510], 80.00th=[ 537], 90.00th=[ 644], 95.00th=[ 676], 00:19:29.374 | 99.00th=[ 840], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:19:29.374 | 99.99th=[42206] 00:19:29.374 bw ( KiB/s): min= 95, max= 8008, per=85.37%, avg=4853.57, stdev=3682.63, samples=7 00:19:29.374 iops : min= 23, max= 2002, avg=1213.29, stdev=920.82, samples=7 00:19:29.374 lat (usec) : 500=61.78%, 750=36.03%, 1000=1.39% 00:19:29.374 lat (msec) : 50=0.78% 00:19:29.374 cpu : usr=1.29%, sys=2.99%, ctx=4255, majf=0, minf=1 00:19:29.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.374 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.374 issued rwts: total=4252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:29.374 job2: (groupid=0, 
jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=556226: Sat Jul 20 17:11:45 2024 00:19:29.374 read: IOPS=24, BW=97.3KiB/s (99.7kB/s)(304KiB/3123msec) 00:19:29.374 slat (nsec): min=13099, max=79160, avg=26574.21, stdev=11817.61 00:19:29.374 clat (usec): min=40725, max=42032, avg=40996.62, stdev=178.36 00:19:29.374 lat (usec): min=40804, max=42055, avg=41023.16, stdev=177.05 00:19:29.374 clat percentiles (usec): 00:19:29.374 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:29.374 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:29.374 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:29.374 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:29.374 | 99.99th=[42206] 00:19:29.374 bw ( KiB/s): min= 96, max= 104, per=1.71%, avg=97.33, stdev= 3.27, samples=6 00:19:29.374 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:19:29.374 lat (msec) : 50=98.70% 00:19:29.374 cpu : usr=0.13%, sys=0.00%, ctx=81, majf=0, minf=1 00:19:29.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.374 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.374 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:29.374 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=556227: Sat Jul 20 17:11:45 2024 00:19:29.374 read: IOPS=266, BW=1066KiB/s (1091kB/s)(3088KiB/2898msec) 00:19:29.374 slat (nsec): min=8869, max=64297, avg=26226.67, stdev=9316.34 00:19:29.374 clat (usec): min=577, max=42206, avg=3719.42, stdev=10536.12 00:19:29.374 lat (usec): min=610, max=42224, avg=3745.66, stdev=10535.03 00:19:29.374 clat percentiles (usec): 00:19:29.374 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 644], 20.00th=[ 660], 00:19:29.374 | 30.00th=[ 668], 40.00th=[ 685], 50.00th=[ 709], 60.00th=[ 725], 00:19:29.374 | 70.00th=[ 775], 80.00th=[ 857], 90.00th=[ 1237], 95.00th=[41157], 00:19:29.374 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:29.374 | 99.99th=[42206] 00:19:29.374 bw ( KiB/s): min= 104, max= 3120, per=21.44%, avg=1219.20, stdev=1245.58, samples=5 00:19:29.374 iops : min= 26, max= 780, avg=304.80, stdev=311.40, samples=5 00:19:29.374 lat (usec) : 750=66.36%, 1000=18.76% 00:19:29.374 lat (msec) : 2=7.37%, 50=7.37% 00:19:29.374 cpu : usr=0.38%, sys=0.72%, ctx=774, majf=0, minf=1 00:19:29.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.374 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.374 issued rwts: total=773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:29.374 00:19:29.374 Run status group 0 (all jobs): 00:19:29.374 READ: bw=5685KiB/s (5821kB/s), 97.3KiB/s-4662KiB/s (99.7kB/s-4774kB/s), io=20.2MiB (21.2MB), run=2898-3647msec 00:19:29.374 00:19:29.374 Disk stats (read/write): 00:19:29.374 nvme0n1: ios=82/0, merge=0/0, ticks=3288/0, in_queue=3288, util=95.31% 00:19:29.374 nvme0n2: ios=4249/0, merge=0/0, ticks=3398/0, in_queue=3398, util=96.19% 00:19:29.374 nvme0n3: ios=123/0, merge=0/0, ticks=4173/0, in_queue=4173, util=100.00% 00:19:29.374 nvme0n4: ios=770/0, 
merge=0/0, ticks=2781/0, in_queue=2781, util=96.73% 00:19:29.631 17:11:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.631 17:11:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:29.888 17:11:45 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.888 17:11:45 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:30.145 17:11:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:30.145 17:11:46 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:30.403 17:11:46 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:30.403 17:11:46 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:30.660 17:11:46 -- target/fio.sh@69 -- # fio_status=0 00:19:30.660 17:11:46 -- target/fio.sh@70 -- # wait 556009 00:19:30.660 17:11:46 -- target/fio.sh@70 -- # fio_status=4 00:19:30.660 17:11:46 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:30.917 17:11:46 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:30.917 17:11:46 -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.917 17:11:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:30.917 17:11:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.917 17:11:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:30.917 17:11:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.917 17:11:46 -- common/autotest_common.sh@1210 -- # return 0 00:19:30.917 17:11:46 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:30.917 17:11:46 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:30.917 nvmf hotplug test: fio failed as expected 00:19:30.917 17:11:46 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.175 17:11:47 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:31.175 17:11:47 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:31.175 17:11:47 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:31.175 17:11:47 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:31.175 17:11:47 -- target/fio.sh@91 -- # nvmftestfini 00:19:31.175 17:11:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:31.175 17:11:47 -- nvmf/common.sh@116 -- # sync 00:19:31.175 17:11:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:31.175 17:11:47 -- nvmf/common.sh@119 -- # set +e 00:19:31.175 17:11:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:31.175 17:11:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:31.175 rmmod nvme_tcp 00:19:31.175 rmmod nvme_fabrics 00:19:31.175 rmmod nvme_keyring 00:19:31.175 17:11:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:31.175 17:11:47 -- nvmf/common.sh@123 -- # set -e 00:19:31.175 17:11:47 -- nvmf/common.sh@124 -- # return 0 00:19:31.175 17:11:47 -- nvmf/common.sh@477 -- # '[' -n 554005 ']' 00:19:31.175 17:11:47 -- nvmf/common.sh@478 -- # 
killprocess 554005 00:19:31.175 17:11:47 -- common/autotest_common.sh@926 -- # '[' -z 554005 ']' 00:19:31.175 17:11:47 -- common/autotest_common.sh@930 -- # kill -0 554005 00:19:31.175 17:11:47 -- common/autotest_common.sh@931 -- # uname 00:19:31.175 17:11:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:31.175 17:11:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 554005 00:19:31.175 17:11:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:31.175 17:11:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:31.175 17:11:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 554005' 00:19:31.175 killing process with pid 554005 00:19:31.175 17:11:47 -- common/autotest_common.sh@945 -- # kill 554005 00:19:31.175 17:11:47 -- common/autotest_common.sh@950 -- # wait 554005 00:19:31.433 17:11:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:31.433 17:11:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:31.433 17:11:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:31.433 17:11:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.433 17:11:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:31.433 17:11:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.433 17:11:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.433 17:11:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.332 17:11:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:33.332 00:19:33.332 real 0m23.555s 00:19:33.332 user 1m21.809s 00:19:33.332 sys 0m6.266s 00:19:33.332 17:11:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.332 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:33.332 ************************************ 00:19:33.332 END TEST nvmf_fio_target 00:19:33.332 ************************************ 00:19:33.332 17:11:49 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:33.332 17:11:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:33.332 17:11:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:33.332 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:33.332 ************************************ 00:19:33.332 START TEST nvmf_bdevio 00:19:33.332 ************************************ 00:19:33.332 17:11:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:33.590 * Looking for test storage... 
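The hotplug sequence traced above follows a fixed pattern: fio-wrapper starts long-running readers against /dev/nvme0n1..nvme0n4, the harness sleeps, then deletes the RAID and malloc bdevs backing the namespaces so every reader dies with err=121 (Remote I/O error), and a fio exit status of 4 is treated as success. A minimal bash sketch of that pattern, condensed from the fio.sh@58-@70 trace above (paths, flags, bdev names, and the sleep all appear in the log; the exact error-handling plumbing inside fio.sh is an assumption):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
fio_wrapper=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper

# start readers that will outlive the bdevs they read from
$fio_wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# delete the bdevs out from under the running jobs
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete $m
done

# every job should fail with err=121 (Remote I/O error); a clean fio exit
# here would mean the hotplug went unnoticed, which is the failure case
wait $fio_pid && exit 1
echo 'nvmf hotplug test: fio failed as expected'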
00:19:33.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.590 17:11:49 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.590 17:11:49 -- nvmf/common.sh@7 -- # uname -s 00:19:33.590 17:11:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.590 17:11:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.590 17:11:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.590 17:11:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.590 17:11:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.590 17:11:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.590 17:11:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.590 17:11:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.590 17:11:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.590 17:11:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.590 17:11:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.590 17:11:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.590 17:11:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.590 17:11:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.590 17:11:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.590 17:11:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.590 17:11:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.590 17:11:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.590 17:11:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.590 17:11:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.591 17:11:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.591 17:11:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.591 17:11:49 -- paths/export.sh@5 -- # export PATH 00:19:33.591 17:11:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.591 17:11:49 -- nvmf/common.sh@46 -- # : 0 00:19:33.591 17:11:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:33.591 17:11:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:33.591 17:11:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:33.591 17:11:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.591 17:11:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.591 17:11:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:33.591 17:11:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:33.591 17:11:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:33.591 17:11:49 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.591 17:11:49 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.591 17:11:49 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:33.591 17:11:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:33.591 17:11:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.591 17:11:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:33.591 17:11:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:33.591 17:11:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:33.591 17:11:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.591 17:11:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.591 17:11:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.591 17:11:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:33.591 17:11:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:33.591 17:11:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:33.591 17:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:35.488 17:11:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:35.488 17:11:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:35.488 17:11:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:35.488 17:11:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:35.488 17:11:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:35.488 17:11:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:35.488 17:11:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:35.488 17:11:51 -- nvmf/common.sh@294 -- # net_devs=() 00:19:35.488 17:11:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:35.488 17:11:51 -- nvmf/common.sh@295 
-- # e810=() 00:19:35.488 17:11:51 -- nvmf/common.sh@295 -- # local -ga e810 00:19:35.488 17:11:51 -- nvmf/common.sh@296 -- # x722=() 00:19:35.488 17:11:51 -- nvmf/common.sh@296 -- # local -ga x722 00:19:35.488 17:11:51 -- nvmf/common.sh@297 -- # mlx=() 00:19:35.488 17:11:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:35.488 17:11:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.488 17:11:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:35.488 17:11:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:35.488 17:11:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:35.488 17:11:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:35.488 17:11:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:35.488 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:35.488 17:11:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:35.488 17:11:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:35.488 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:35.488 17:11:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:35.488 17:11:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:35.488 17:11:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.488 17:11:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:35.488 17:11:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.488 17:11:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:35.488 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:35.488 17:11:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.488 17:11:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:35.488 17:11:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.488 17:11:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:35.488 17:11:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.488 17:11:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:35.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:35.488 17:11:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.488 17:11:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:35.488 17:11:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:35.488 17:11:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:35.488 17:11:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.488 17:11:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.488 17:11:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.488 17:11:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:35.488 17:11:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.488 17:11:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.488 17:11:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:35.488 17:11:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.488 17:11:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.488 17:11:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:35.488 17:11:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:35.488 17:11:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.488 17:11:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.488 17:11:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.488 17:11:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.488 17:11:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:35.488 17:11:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.488 17:11:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.488 17:11:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.488 17:11:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:35.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:19:35.488 00:19:35.488 --- 10.0.0.2 ping statistics --- 00:19:35.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.488 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:19:35.488 17:11:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:35.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:19:35.488 00:19:35.488 --- 10.0.0.1 ping statistics --- 00:19:35.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.488 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:35.488 17:11:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.488 17:11:51 -- nvmf/common.sh@410 -- # return 0 00:19:35.488 17:11:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:35.488 17:11:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.488 17:11:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:35.488 17:11:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.488 17:11:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:35.488 17:11:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:35.488 17:11:51 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:35.488 17:11:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:35.488 17:11:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:35.488 17:11:51 -- common/autotest_common.sh@10 -- # set +x 00:19:35.488 17:11:51 -- nvmf/common.sh@469 -- # nvmfpid=558750 00:19:35.488 17:11:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:35.488 17:11:51 -- nvmf/common.sh@470 -- # waitforlisten 558750 00:19:35.488 17:11:51 -- common/autotest_common.sh@819 -- # '[' -z 558750 ']' 00:19:35.488 17:11:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.488 17:11:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:35.488 17:11:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.488 17:11:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:35.488 17:11:51 -- common/autotest_common.sh@10 -- # set +x 00:19:35.488 [2024-07-20 17:11:51.635843] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:35.488 [2024-07-20 17:11:51.635924] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.745 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.745 [2024-07-20 17:11:51.704161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.745 [2024-07-20 17:11:51.790713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:35.745 [2024-07-20 17:11:51.790886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.745 [2024-07-20 17:11:51.790907] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.745 [2024-07-20 17:11:51.790920] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
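The namespace plumbing in the nvmf_tcp_init trace just above is worth pulling out: the target-side NIC (cvl_0_0) moves into a private network namespace while the initiator NIC (cvl_0_1) stays in the default one, so both ends of the NVMe/TCP connection run on a single host but still cross a real link. Condensed from the commands at nvmf/common.sh@241-@267:

NS=cvl_0_0_ns_spdk

ip netns add $NS
ip link set cvl_0_0 netns $NS                            # target NIC into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, default ns
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, private ns

ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec $NS ping -c 1 10.0.0.1    # target -> initiator

The target is then launched inside the namespace, which is why the nvmf_tgt command line in the trace carries the "ip netns exec cvl_0_0_ns_spdk" prefix while initiator-side tools run bare.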
00:19:35.745 [2024-07-20 17:11:51.791008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:35.745 [2024-07-20 17:11:51.791062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:35.745 [2024-07-20 17:11:51.791120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:35.745 [2024-07-20 17:11:51.791124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.713 17:11:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:36.713 17:11:52 -- common/autotest_common.sh@852 -- # return 0 00:19:36.713 17:11:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:36.713 17:11:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:36.713 17:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.713 17:11:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.713 17:11:52 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.713 17:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.713 17:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.713 [2024-07-20 17:11:52.610492] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.713 17:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.713 17:11:52 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:36.713 17:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.713 17:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.713 Malloc0 00:19:36.713 17:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.713 17:11:52 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:36.713 17:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.713 17:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.713 17:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.713 17:11:52 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.713 17:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.713 17:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.713 17:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.713 17:11:52 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.713 17:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.713 17:11:52 -- common/autotest_common.sh@10 -- # set +x 00:19:36.713 [2024-07-20 17:11:52.661614] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.713 17:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.713 17:11:52 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:36.713 17:11:52 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:36.713 17:11:52 -- nvmf/common.sh@520 -- # config=() 00:19:36.713 17:11:52 -- nvmf/common.sh@520 -- # local subsystem config 00:19:36.713 17:11:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:36.713 17:11:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:36.713 { 00:19:36.713 "params": { 00:19:36.713 "name": "Nvme$subsystem", 00:19:36.713 "trtype": "$TEST_TRANSPORT", 00:19:36.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.713 "adrfam": "ipv4", 00:19:36.713 "trsvcid": 
"$NVMF_PORT", 00:19:36.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.713 "hdgst": ${hdgst:-false}, 00:19:36.713 "ddgst": ${ddgst:-false} 00:19:36.713 }, 00:19:36.713 "method": "bdev_nvme_attach_controller" 00:19:36.713 } 00:19:36.713 EOF 00:19:36.713 )") 00:19:36.713 17:11:52 -- nvmf/common.sh@542 -- # cat 00:19:36.713 17:11:52 -- nvmf/common.sh@544 -- # jq . 00:19:36.713 17:11:52 -- nvmf/common.sh@545 -- # IFS=, 00:19:36.713 17:11:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:36.713 "params": { 00:19:36.713 "name": "Nvme1", 00:19:36.713 "trtype": "tcp", 00:19:36.713 "traddr": "10.0.0.2", 00:19:36.713 "adrfam": "ipv4", 00:19:36.713 "trsvcid": "4420", 00:19:36.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.713 "hdgst": false, 00:19:36.713 "ddgst": false 00:19:36.713 }, 00:19:36.713 "method": "bdev_nvme_attach_controller" 00:19:36.713 }' 00:19:36.713 [2024-07-20 17:11:52.702351] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:36.713 [2024-07-20 17:11:52.702436] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid558909 ] 00:19:36.713 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.713 [2024-07-20 17:11:52.763502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:36.713 [2024-07-20 17:11:52.849971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.713 [2024-07-20 17:11:52.850023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.713 [2024-07-20 17:11:52.850026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.276 [2024-07-20 17:11:53.143896] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:37.276 [2024-07-20 17:11:53.143950] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:37.276 I/O targets: 00:19:37.276 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:37.276 00:19:37.276 00:19:37.276 CUnit - A unit testing framework for C - Version 2.1-3 00:19:37.276 http://cunit.sourceforge.net/ 00:19:37.276 00:19:37.276 00:19:37.276 Suite: bdevio tests on: Nvme1n1 00:19:37.276 Test: blockdev write read block ...passed 00:19:37.276 Test: blockdev write zeroes read block ...passed 00:19:37.276 Test: blockdev write zeroes read no split ...passed 00:19:37.276 Test: blockdev write zeroes read split ...passed 00:19:37.276 Test: blockdev write zeroes read split partial ...passed 00:19:37.276 Test: blockdev reset ...[2024-07-20 17:11:53.379745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.276 [2024-07-20 17:11:53.379872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23d3e00 (9): Bad file descriptor 00:19:37.276 [2024-07-20 17:11:53.394094] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:37.276 passed 00:19:37.276 Test: blockdev write read 8 blocks ...passed 00:19:37.276 Test: blockdev write read size > 128k ...passed 00:19:37.276 Test: blockdev write read invalid size ...passed 00:19:37.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:37.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:37.532 Test: blockdev write read max offset ...passed 00:19:37.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:37.532 Test: blockdev writev readv 8 blocks ...passed 00:19:37.532 Test: blockdev writev readv 30 x 1block ...passed 00:19:37.532 Test: blockdev writev readv block ...passed 00:19:37.532 Test: blockdev writev readv size > 128k ...passed 00:19:37.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:37.532 Test: blockdev comparev and writev ...[2024-07-20 17:11:53.574688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.532 [2024-07-20 17:11:53.574726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.532 [2024-07-20 17:11:53.574750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.532 [2024-07-20 17:11:53.574768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:37.532 [2024-07-20 17:11:53.575215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.532 [2024-07-20 17:11:53.575239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:37.532 [2024-07-20 17:11:53.575262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.532 [2024-07-20 17:11:53.575279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:37.532 [2024-07-20 17:11:53.575710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.533 [2024-07-20 17:11:53.575736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:37.533 [2024-07-20 17:11:53.575758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.533 [2024-07-20 17:11:53.575776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:37.533 [2024-07-20 17:11:53.576219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.533 [2024-07-20 17:11:53.576244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:37.533 [2024-07-20 17:11:53.576267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.533 [2024-07-20 17:11:53.576284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:37.533 passed 00:19:37.533 Test: blockdev nvme passthru rw ...passed 00:19:37.533 Test: blockdev nvme passthru vendor specific ...[2024-07-20 17:11:53.660246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.533 [2024-07-20 17:11:53.660273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:37.533 [2024-07-20 17:11:53.660526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.533 [2024-07-20 17:11:53.660550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:37.533 [2024-07-20 17:11:53.660807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.533 [2024-07-20 17:11:53.660831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:37.533 [2024-07-20 17:11:53.661086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.533 [2024-07-20 17:11:53.661111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:37.533 passed 00:19:37.533 Test: blockdev nvme admin passthru ...passed 00:19:37.789 Test: blockdev copy ...passed 00:19:37.789 00:19:37.789 Run Summary: Type Total Ran Passed Failed Inactive 00:19:37.789 suites 1 1 n/a 0 0 00:19:37.790 tests 23 23 23 0 0 00:19:37.790 asserts 152 152 152 0 n/a 00:19:37.790 00:19:37.790 Elapsed time = 1.154 seconds 00:19:37.790 17:11:53 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:37.790 17:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.790 17:11:53 -- common/autotest_common.sh@10 -- # set +x 00:19:37.790 17:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.790 17:11:53 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:37.790 17:11:53 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:37.790 17:11:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:37.790 17:11:53 -- nvmf/common.sh@116 -- # sync 00:19:37.790 17:11:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:37.790 17:11:53 -- nvmf/common.sh@119 -- # set +e 00:19:37.790 17:11:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:37.790 17:11:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:37.790 rmmod nvme_tcp 00:19:38.046 rmmod nvme_fabrics 00:19:38.046 rmmod nvme_keyring 00:19:38.046 17:11:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:38.046 17:11:53 -- nvmf/common.sh@123 -- # set -e 00:19:38.046 17:11:53 -- nvmf/common.sh@124 -- # return 0 00:19:38.046 17:11:53 -- nvmf/common.sh@477 -- # '[' -n 558750 ']' 00:19:38.046 17:11:53 -- nvmf/common.sh@478 -- # killprocess 558750 00:19:38.046 17:11:53 -- common/autotest_common.sh@926 -- # '[' -z 558750 ']' 00:19:38.046 17:11:53 -- common/autotest_common.sh@930 -- # kill -0 558750 00:19:38.046 17:11:53 -- common/autotest_common.sh@931 -- # uname 00:19:38.046 17:11:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:38.046 17:11:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 558750 00:19:38.046 17:11:54 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:38.046 17:11:54 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:38.046 17:11:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 558750' 00:19:38.046 killing process with pid 558750 00:19:38.046 17:11:54 -- common/autotest_common.sh@945 -- # kill 558750 00:19:38.046 17:11:54 -- common/autotest_common.sh@950 -- # wait 558750 00:19:38.304 17:11:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:38.304 17:11:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:38.304 17:11:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:38.304 17:11:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.304 17:11:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:38.304 17:11:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.304 17:11:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.304 17:11:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.201 17:11:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:40.201 00:19:40.201 real 0m6.825s 00:19:40.201 user 0m12.984s 00:19:40.201 sys 0m2.040s 00:19:40.201 17:11:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.201 17:11:56 -- common/autotest_common.sh@10 -- # set +x 00:19:40.201 ************************************ 00:19:40.201 END TEST nvmf_bdevio 00:19:40.201 ************************************ 00:19:40.201 17:11:56 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:40.202 17:11:56 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:40.202 17:11:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:40.202 17:11:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:40.202 17:11:56 -- common/autotest_common.sh@10 -- # set +x 00:19:40.202 ************************************ 00:19:40.202 START TEST nvmf_bdevio_no_huge 00:19:40.202 ************************************ 00:19:40.202 17:11:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:40.459 * Looking for test storage... 
00:19:40.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:40.459 17:11:56 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.459 17:11:56 -- nvmf/common.sh@7 -- # uname -s 00:19:40.459 17:11:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.459 17:11:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.459 17:11:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.459 17:11:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.459 17:11:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.459 17:11:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.459 17:11:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.459 17:11:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.459 17:11:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.459 17:11:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.459 17:11:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.459 17:11:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.459 17:11:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.459 17:11:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.459 17:11:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.459 17:11:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:40.459 17:11:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.459 17:11:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.459 17:11:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.459 17:11:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.459 17:11:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.459 17:11:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.459 17:11:56 -- paths/export.sh@5 -- # export PATH 00:19:40.459 17:11:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.459 17:11:56 -- nvmf/common.sh@46 -- # : 0 00:19:40.459 17:11:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:40.459 17:11:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:40.459 17:11:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:40.459 17:11:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.459 17:11:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.459 17:11:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:40.459 17:11:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:40.459 17:11:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:40.459 17:11:56 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:40.459 17:11:56 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:40.459 17:11:56 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:40.459 17:11:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:40.459 17:11:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.459 17:11:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:40.459 17:11:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:40.459 17:11:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:40.459 17:11:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.459 17:11:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.459 17:11:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.459 17:11:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:40.459 17:11:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:40.459 17:11:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:40.459 17:11:56 -- common/autotest_common.sh@10 -- # set +x 00:19:42.386 17:11:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:42.386 17:11:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:42.386 17:11:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:42.386 17:11:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:42.386 17:11:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:42.386 17:11:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:42.386 17:11:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:42.386 17:11:58 -- nvmf/common.sh@294 -- # net_devs=() 00:19:42.386 17:11:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:42.386 17:11:58 -- nvmf/common.sh@295 
-- # e810=() 00:19:42.386 17:11:58 -- nvmf/common.sh@295 -- # local -ga e810 00:19:42.386 17:11:58 -- nvmf/common.sh@296 -- # x722=() 00:19:42.386 17:11:58 -- nvmf/common.sh@296 -- # local -ga x722 00:19:42.386 17:11:58 -- nvmf/common.sh@297 -- # mlx=() 00:19:42.386 17:11:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:42.386 17:11:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.386 17:11:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:42.386 17:11:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:42.386 17:11:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:42.386 17:11:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.386 17:11:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:42.386 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:42.386 17:11:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.386 17:11:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:42.386 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:42.386 17:11:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:42.386 17:11:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:42.386 17:11:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.386 17:11:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.386 17:11:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.386 17:11:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.386 17:11:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:42.386 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:42.386 17:11:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.386 17:11:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.386 17:11:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.386 17:11:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.386 17:11:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.387 17:11:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:42.387 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:42.387 17:11:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.387 17:11:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:42.387 17:11:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:42.387 17:11:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:42.387 17:11:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:42.387 17:11:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:42.387 17:11:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.387 17:11:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.387 17:11:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.387 17:11:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:42.387 17:11:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.387 17:11:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.387 17:11:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:42.387 17:11:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.387 17:11:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.387 17:11:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:42.387 17:11:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:42.387 17:11:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.387 17:11:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.387 17:11:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.387 17:11:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.387 17:11:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:42.387 17:11:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.645 17:11:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.645 17:11:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.645 17:11:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:42.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:19:42.645 00:19:42.645 --- 10.0.0.2 ping statistics --- 00:19:42.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.645 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:19:42.645 17:11:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:19:42.645 00:19:42.645 --- 10.0.0.1 ping statistics --- 00:19:42.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.645 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:19:42.645 17:11:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.645 17:11:58 -- nvmf/common.sh@410 -- # return 0 00:19:42.645 17:11:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:42.645 17:11:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.645 17:11:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:42.645 17:11:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:42.645 17:11:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.645 17:11:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:42.645 17:11:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:42.645 17:11:58 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:42.645 17:11:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:42.645 17:11:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:42.645 17:11:58 -- common/autotest_common.sh@10 -- # set +x 00:19:42.645 17:11:58 -- nvmf/common.sh@469 -- # nvmfpid=561025 00:19:42.645 17:11:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:42.645 17:11:58 -- nvmf/common.sh@470 -- # waitforlisten 561025 00:19:42.645 17:11:58 -- common/autotest_common.sh@819 -- # '[' -z 561025 ']' 00:19:42.645 17:11:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.645 17:11:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:42.645 17:11:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.645 17:11:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:42.645 17:11:58 -- common/autotest_common.sh@10 -- # set +x 00:19:42.645 [2024-07-20 17:11:58.675269] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:42.645 [2024-07-20 17:11:58.675343] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:42.645 [2024-07-20 17:11:58.746296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.903 [2024-07-20 17:11:58.835897] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:42.903 [2024-07-20 17:11:58.836055] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.903 [2024-07-20 17:11:58.836075] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.903 [2024-07-20 17:11:58.836090] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
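Annotation: the nvmf_tcp_init trace just above shows the physical-NIC topology these tests run on. One E810 port (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening port 4420. A standalone sketch of the same wiring, using only commands taken from that trace (it assumes root privileges and that the cvl_0_0/cvl_0_1 interfaces already exist):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                         # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target -> initiator sanity check

The ping replies logged above and below are exactly these two sanity checks succeeding.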
00:19:42.903 [2024-07-20 17:11:58.836173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:42.903 [2024-07-20 17:11:58.836264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:42.903 [2024-07-20 17:11:58.836361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:42.903 [2024-07-20 17:11:58.836364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.836 17:11:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:43.836 17:11:59 -- common/autotest_common.sh@852 -- # return 0 00:19:43.836 17:11:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:43.836 17:11:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:43.836 17:11:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.836 17:11:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.836 17:11:59 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:43.836 17:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.836 17:11:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.836 [2024-07-20 17:11:59.715194] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.836 17:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.836 17:11:59 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:43.836 17:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.836 17:11:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.836 Malloc0 00:19:43.836 17:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.836 17:11:59 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:43.836 17:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.836 17:11:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.836 17:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.836 17:11:59 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.836 17:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.836 17:11:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.836 17:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.836 17:11:59 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.836 17:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.836 17:11:59 -- common/autotest_common.sh@10 -- # set +x 00:19:43.836 [2024-07-20 17:11:59.752972] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.836 17:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.836 17:11:59 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:43.836 17:11:59 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:43.836 17:11:59 -- nvmf/common.sh@520 -- # config=() 00:19:43.836 17:11:59 -- nvmf/common.sh@520 -- # local subsystem config 00:19:43.836 17:11:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:43.836 17:11:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:43.836 { 00:19:43.836 "params": { 00:19:43.836 "name": "Nvme$subsystem", 00:19:43.836 "trtype": "$TEST_TRANSPORT", 00:19:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.836 "adrfam": "ipv4", 00:19:43.836 
"trsvcid": "$NVMF_PORT", 00:19:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.836 "hdgst": ${hdgst:-false}, 00:19:43.836 "ddgst": ${ddgst:-false} 00:19:43.836 }, 00:19:43.836 "method": "bdev_nvme_attach_controller" 00:19:43.836 } 00:19:43.836 EOF 00:19:43.836 )") 00:19:43.836 17:11:59 -- nvmf/common.sh@542 -- # cat 00:19:43.836 17:11:59 -- nvmf/common.sh@544 -- # jq . 00:19:43.836 17:11:59 -- nvmf/common.sh@545 -- # IFS=, 00:19:43.836 17:11:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:43.836 "params": { 00:19:43.836 "name": "Nvme1", 00:19:43.836 "trtype": "tcp", 00:19:43.836 "traddr": "10.0.0.2", 00:19:43.836 "adrfam": "ipv4", 00:19:43.836 "trsvcid": "4420", 00:19:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.836 "hdgst": false, 00:19:43.836 "ddgst": false 00:19:43.836 }, 00:19:43.836 "method": "bdev_nvme_attach_controller" 00:19:43.836 }' 00:19:43.836 [2024-07-20 17:11:59.799140] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:43.836 [2024-07-20 17:11:59.799239] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid561177 ] 00:19:43.836 [2024-07-20 17:11:59.866290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:43.836 [2024-07-20 17:11:59.951084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.836 [2024-07-20 17:11:59.951138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.836 [2024-07-20 17:11:59.951141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.094 [2024-07-20 17:12:00.099091] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:44.094 [2024-07-20 17:12:00.099142] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:44.094 I/O targets: 00:19:44.094 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:44.094 00:19:44.094 00:19:44.094 CUnit - A unit testing framework for C - Version 2.1-3 00:19:44.094 http://cunit.sourceforge.net/ 00:19:44.094 00:19:44.094 00:19:44.094 Suite: bdevio tests on: Nvme1n1 00:19:44.094 Test: blockdev write read block ...passed 00:19:44.094 Test: blockdev write zeroes read block ...passed 00:19:44.094 Test: blockdev write zeroes read no split ...passed 00:19:44.351 Test: blockdev write zeroes read split ...passed 00:19:44.351 Test: blockdev write zeroes read split partial ...passed 00:19:44.351 Test: blockdev reset ...[2024-07-20 17:12:00.329312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:44.352 [2024-07-20 17:12:00.329414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e720 (9): Bad file descriptor 00:19:44.352 [2024-07-20 17:12:00.388010] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:44.352 passed 00:19:44.352 Test: blockdev write read 8 blocks ...passed 00:19:44.352 Test: blockdev write read size > 128k ...passed 00:19:44.352 Test: blockdev write read invalid size ...passed 00:19:44.352 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:44.352 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:44.352 Test: blockdev write read max offset ...passed 00:19:44.609 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:44.609 Test: blockdev writev readv 8 blocks ...passed 00:19:44.609 Test: blockdev writev readv 30 x 1block ...passed 00:19:44.609 Test: blockdev writev readv block ...passed 00:19:44.609 Test: blockdev writev readv size > 128k ...passed 00:19:44.609 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:44.609 Test: blockdev comparev and writev ...[2024-07-20 17:12:00.609309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.609 [2024-07-20 17:12:00.609345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.609369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.609 [2024-07-20 17:12:00.609385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.609869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.609 [2024-07-20 17:12:00.609894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.609916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.609 [2024-07-20 17:12:00.609937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.610412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.609 [2024-07-20 17:12:00.610435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.610456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.609 [2024-07-20 17:12:00.610472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.610953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.609 [2024-07-20 17:12:00.610977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.610998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:44.609 [2024-07-20 17:12:00.611014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:44.609 passed 00:19:44.609 Test: blockdev nvme passthru rw ...passed 00:19:44.609 Test: blockdev nvme passthru vendor specific ...[2024-07-20 17:12:00.694305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.609 [2024-07-20 17:12:00.694331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.694622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.609 [2024-07-20 17:12:00.694645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.694912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.609 [2024-07-20 17:12:00.694936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:44.609 [2024-07-20 17:12:00.695195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:44.609 [2024-07-20 17:12:00.695218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:44.609 passed 00:19:44.609 Test: blockdev nvme admin passthru ...passed 00:19:44.609 Test: blockdev copy ...passed 00:19:44.609 00:19:44.609 Run Summary: Type Total Ran Passed Failed Inactive 00:19:44.609 suites 1 1 n/a 0 0 00:19:44.609 tests 23 23 23 0 0 00:19:44.609 asserts 152 152 152 0 n/a 00:19:44.609 00:19:44.609 Elapsed time = 1.305 seconds 00:19:45.175 17:12:01 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.175 17:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.175 17:12:01 -- common/autotest_common.sh@10 -- # set +x 00:19:45.175 17:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.175 17:12:01 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:45.175 17:12:01 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:45.175 17:12:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:45.175 17:12:01 -- nvmf/common.sh@116 -- # sync 00:19:45.175 17:12:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:45.175 17:12:01 -- nvmf/common.sh@119 -- # set +e 00:19:45.175 17:12:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:45.175 17:12:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:45.175 rmmod nvme_tcp 00:19:45.175 rmmod nvme_fabrics 00:19:45.175 rmmod nvme_keyring 00:19:45.175 17:12:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:45.175 17:12:01 -- nvmf/common.sh@123 -- # set -e 00:19:45.175 17:12:01 -- nvmf/common.sh@124 -- # return 0 00:19:45.175 17:12:01 -- nvmf/common.sh@477 -- # '[' -n 561025 ']' 00:19:45.175 17:12:01 -- nvmf/common.sh@478 -- # killprocess 561025 00:19:45.175 17:12:01 -- common/autotest_common.sh@926 -- # '[' -z 561025 ']' 00:19:45.175 17:12:01 -- common/autotest_common.sh@930 -- # kill -0 561025 00:19:45.175 17:12:01 -- common/autotest_common.sh@931 -- # uname 00:19:45.175 17:12:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:45.175 17:12:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 561025 00:19:45.175 17:12:01 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:45.175 17:12:01 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:45.175 17:12:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 561025' 00:19:45.175 killing process with pid 561025 00:19:45.175 17:12:01 -- common/autotest_common.sh@945 -- # kill 561025 00:19:45.175 17:12:01 -- common/autotest_common.sh@950 -- # wait 561025 00:19:45.434 17:12:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:45.434 17:12:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:45.434 17:12:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:45.434 17:12:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.434 17:12:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:45.434 17:12:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.434 17:12:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.434 17:12:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.963 17:12:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:47.963 00:19:47.963 real 0m7.237s 00:19:47.963 user 0m13.483s 00:19:47.963 sys 0m2.566s 00:19:47.963 17:12:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.963 17:12:03 -- common/autotest_common.sh@10 -- # set +x 00:19:47.963 ************************************ 00:19:47.963 END TEST nvmf_bdevio_no_huge 00:19:47.963 ************************************ 00:19:47.963 17:12:03 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:47.963 17:12:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:47.963 17:12:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:47.963 17:12:03 -- common/autotest_common.sh@10 -- # set +x 00:19:47.963 ************************************ 00:19:47.963 START TEST nvmf_tls 00:19:47.963 ************************************ 00:19:47.963 17:12:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:47.963 * Looking for test storage... 
00:19:47.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.963 17:12:03 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.963 17:12:03 -- nvmf/common.sh@7 -- # uname -s 00:19:47.963 17:12:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.963 17:12:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.963 17:12:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.963 17:12:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.963 17:12:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.963 17:12:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.963 17:12:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.963 17:12:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.963 17:12:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.963 17:12:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.963 17:12:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.963 17:12:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.963 17:12:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.963 17:12:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.963 17:12:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.963 17:12:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.963 17:12:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.963 17:12:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.963 17:12:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.963 17:12:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.963 17:12:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.963 17:12:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.963 17:12:03 -- paths/export.sh@5 -- # export PATH 00:19:47.963 17:12:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.963 17:12:03 -- nvmf/common.sh@46 -- # : 0 00:19:47.963 17:12:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.963 17:12:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.963 17:12:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.963 17:12:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.963 17:12:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.963 17:12:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.963 17:12:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.963 17:12:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.963 17:12:03 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.963 17:12:03 -- target/tls.sh@71 -- # nvmftestinit 00:19:47.963 17:12:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.963 17:12:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.963 17:12:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.963 17:12:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.963 17:12:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.963 17:12:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.963 17:12:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.963 17:12:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.963 17:12:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:47.963 17:12:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:47.963 17:12:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:47.963 17:12:03 -- common/autotest_common.sh@10 -- # set +x 00:19:49.859 17:12:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:49.859 17:12:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:49.859 17:12:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:49.859 17:12:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:49.859 17:12:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:49.859 17:12:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:49.859 17:12:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:49.859 17:12:05 -- nvmf/common.sh@294 -- # net_devs=() 00:19:49.859 17:12:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:49.859 17:12:05 -- nvmf/common.sh@295 -- # e810=() 00:19:49.859 
17:12:05 -- nvmf/common.sh@295 -- # local -ga e810 00:19:49.859 17:12:05 -- nvmf/common.sh@296 -- # x722=() 00:19:49.859 17:12:05 -- nvmf/common.sh@296 -- # local -ga x722 00:19:49.859 17:12:05 -- nvmf/common.sh@297 -- # mlx=() 00:19:49.859 17:12:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:49.859 17:12:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.859 17:12:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:49.859 17:12:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:49.859 17:12:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:49.859 17:12:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.859 17:12:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:49.859 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:49.859 17:12:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.859 17:12:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:49.859 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:49.859 17:12:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:49.859 17:12:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:49.859 17:12:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.859 17:12:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.859 17:12:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.859 17:12:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.859 17:12:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:49.859 Found net devices under 
0000:0a:00.0: cvl_0_0 00:19:49.859 17:12:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.859 17:12:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.859 17:12:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.859 17:12:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.859 17:12:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.859 17:12:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:49.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:49.860 17:12:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.860 17:12:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:49.860 17:12:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:49.860 17:12:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:49.860 17:12:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:49.860 17:12:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:49.860 17:12:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.860 17:12:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.860 17:12:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.860 17:12:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:49.860 17:12:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.860 17:12:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.860 17:12:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:49.860 17:12:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.860 17:12:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.860 17:12:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:49.860 17:12:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:49.860 17:12:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.860 17:12:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.860 17:12:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.860 17:12:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.860 17:12:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:49.860 17:12:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.860 17:12:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.860 17:12:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.860 17:12:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:49.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:19:49.860 00:19:49.860 --- 10.0.0.2 ping statistics --- 00:19:49.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.860 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:19:49.860 17:12:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:19:49.860 00:19:49.860 --- 10.0.0.1 ping statistics --- 00:19:49.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.860 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:19:49.860 17:12:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.860 17:12:05 -- nvmf/common.sh@410 -- # return 0 00:19:49.860 17:12:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.860 17:12:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.860 17:12:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:49.860 17:12:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:49.860 17:12:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.860 17:12:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:49.860 17:12:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:49.860 17:12:05 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:49.860 17:12:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.860 17:12:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:49.860 17:12:05 -- common/autotest_common.sh@10 -- # set +x 00:19:49.860 17:12:05 -- nvmf/common.sh@469 -- # nvmfpid=563474 00:19:49.860 17:12:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:49.860 17:12:05 -- nvmf/common.sh@470 -- # waitforlisten 563474 00:19:49.860 17:12:05 -- common/autotest_common.sh@819 -- # '[' -z 563474 ']' 00:19:49.860 17:12:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.860 17:12:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:49.860 17:12:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.860 17:12:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:49.860 17:12:05 -- common/autotest_common.sh@10 -- # set +x 00:19:49.860 [2024-07-20 17:12:05.932245] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:49.860 [2024-07-20 17:12:05.932326] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.860 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.860 [2024-07-20 17:12:06.005221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.117 [2024-07-20 17:12:06.093318] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:50.117 [2024-07-20 17:12:06.093486] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.117 [2024-07-20 17:12:06.093506] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.117 [2024-07-20 17:12:06.093529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:50.117 [2024-07-20 17:12:06.093561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.117 17:12:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:50.117 17:12:06 -- common/autotest_common.sh@852 -- # return 0 00:19:50.117 17:12:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:50.117 17:12:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:50.117 17:12:06 -- common/autotest_common.sh@10 -- # set +x 00:19:50.117 17:12:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.117 17:12:06 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:50.117 17:12:06 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:50.374 true 00:19:50.374 17:12:06 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.374 17:12:06 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:50.632 17:12:06 -- target/tls.sh@82 -- # version=0 00:19:50.632 17:12:06 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:50.632 17:12:06 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:50.890 17:12:06 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.890 17:12:06 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:51.148 17:12:07 -- target/tls.sh@90 -- # version=13 00:19:51.148 17:12:07 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:51.148 17:12:07 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:51.406 17:12:07 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.406 17:12:07 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:51.663 17:12:07 -- target/tls.sh@98 -- # version=7 00:19:51.663 17:12:07 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:51.663 17:12:07 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.663 17:12:07 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:51.939 17:12:07 -- target/tls.sh@105 -- # ktls=false 00:19:51.939 17:12:07 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:51.939 17:12:07 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:52.197 17:12:08 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:52.197 17:12:08 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:52.454 17:12:08 -- target/tls.sh@113 -- # ktls=true 00:19:52.454 17:12:08 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:52.455 17:12:08 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:52.712 17:12:08 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:52.712 17:12:08 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:52.970 17:12:08 -- target/tls.sh@121 -- # ktls=false 00:19:52.970 17:12:08 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:52.970 17:12:08 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:52.970 17:12:08 -- target/tls.sh@49 -- # local key hash crc
00:19:52.970 17:12:08 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff
00:19:52.970 17:12:08 -- target/tls.sh@51 -- # hash=01
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # gzip -1 -c
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # tail -c8
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # head -c 4
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # crc='p$H�'
00:19:52.970 17:12:08 -- target/tls.sh@54 -- # base64 /dev/fd/62
00:19:52.970 17:12:08 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�'
00:19:52.970 17:12:08 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:19:52.970 17:12:08 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:19:52.970 17:12:08 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100
00:19:52.970 17:12:08 -- target/tls.sh@49 -- # local key hash crc
00:19:52.970 17:12:08 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100
00:19:52.970 17:12:08 -- target/tls.sh@51 -- # hash=01
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # gzip -1 -c
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # tail -c8
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # head -c 4
00:19:52.970 17:12:08 -- target/tls.sh@52 -- # crc=$'_\006o\330'
00:19:52.970 17:12:08 -- target/tls.sh@54 -- # base64 /dev/fd/62
00:19:52.970 17:12:08 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330'
00:19:52.970 17:12:08 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:19:52.970 17:12:08 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:19:52.970 17:12:08 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:19:52.970 17:12:08 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt
00:19:52.970 17:12:08 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:19:52.970 17:12:08 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:19:52.970 17:12:08 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:19:52.970 17:12:08 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt
00:19:52.970 17:12:08 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:19:53.226 17:12:09 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:19:53.484 17:12:09 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:19:53.484 17:12:09 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
00:19:53.484 17:12:09 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:19:53.741 [2024-07-20 17:12:09.772473] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
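Annotation: the format_interchange_psk trace above is worth unpacking. It wraps a configured key into the NVMe-oF TLS PSK interchange format, NVMeTLSkey-1:<hash>:<base64 of key bytes followed by their CRC32>:, and it obtains the CRC32 by reading it out of a gzip trailer rather than calling a dedicated checksum tool. A minimal re-derivation of key1.txt using exactly the commands from the trace (assumptions: GNU gzip and coreutils base64; gzip -1 -c ends its output with an 8-byte trailer of CRC32 then input length, both little-endian):

key=00112233445566778899aabbccddeeff
hash=01
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)    # CRC32 scraped from the gzip trailer
echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
# prints: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

One caveat: bash command substitution strips trailing newlines and cannot hold NUL bytes, so this trick only round-trips cleanly for keys whose CRC32 happens to avoid those byte values, as both fixed test keys here do.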
00:19:53.741 17:12:09 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.999 17:12:10 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:54.256 [2024-07-20 17:12:10.301967] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.256 [2024-07-20 17:12:10.302238] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.256 17:12:10 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.513 malloc0 00:19:54.513 17:12:10 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.770 17:12:10 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:55.027 17:12:11 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:55.027 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.021 Initializing NVMe Controllers 00:20:05.021 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.021 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:05.021 Initialization complete. Launching workers. 
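Stripped of the xtrace noise, the target bring-up traced above is a short RPC sequence. A sketch with the workspace paths shortened; the listener address, NQNs and the -k TLS flag are exactly the values used in this run:

    rpc=scripts/rpc.py                                   # full workspace path in the log
    $rpc sock_impl_set_options -i ssl --tls-version 13   # pin TLS 1.3 on the ssl sock impl
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o                 # TCP transport, default options
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                    # -k: TLS listener (experimental)
    $rpc bdev_malloc_create 32 4096 -b malloc0           # 32 MiB namespace, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/target/key1.txt                  # per-host PSK; file kept at 0600

The spdk_nvme_perf initiator launched above connects as host1 with -S ssl and --psk-path pointing at the same key file; its results follow.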
00:20:05.021 ======================================================== 00:20:05.021 Latency(us) 00:20:05.021 Device Information : IOPS MiB/s Average min max 00:20:05.021 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7703.30 30.09 8310.83 1122.65 9006.88 00:20:05.021 ======================================================== 00:20:05.021 Total : 7703.30 30.09 8310.83 1122.65 9006.88 00:20:05.021 00:20:05.021 17:12:21 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:05.021 17:12:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.021 17:12:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.021 17:12:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:05.021 17:12:21 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:05.021 17:12:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.021 17:12:21 -- target/tls.sh@28 -- # bdevperf_pid=565817 00:20:05.021 17:12:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.022 17:12:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.022 17:12:21 -- target/tls.sh@31 -- # waitforlisten 565817 /var/tmp/bdevperf.sock 00:20:05.022 17:12:21 -- common/autotest_common.sh@819 -- # '[' -z 565817 ']' 00:20:05.022 17:12:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.022 17:12:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:05.022 17:12:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.022 17:12:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:05.022 17:12:21 -- common/autotest_common.sh@10 -- # set +x 00:20:05.022 [2024-07-20 17:12:21.157054] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
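The run_bdevperf invocation starting up here, and each negative variant after it, follows one pattern: launch bdevperf idle, attach a TLS controller through its private RPC socket, then drive the workload. A sketch of that loop with workspace paths shortened (waitforlisten is the autotest_common.sh helper visible in the trace; the rest mirrors the commands being run here):

    sock=/var/tmp/bdevperf.sock
    build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!                          # -z: stay idle until driven over RPC
    waitforlisten $bdevperf_pid $sock        # poll until the RPC socket answers
    scripts/rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/target/key1.txt      # TLS attach; the negative cases below fail here
    examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests
    kill $bdevperf_pid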
00:20:05.022 [2024-07-20 17:12:21.157129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid565817 ] 00:20:05.279 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.279 [2024-07-20 17:12:21.214710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.279 [2024-07-20 17:12:21.301716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.211 17:12:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:06.211 17:12:22 -- common/autotest_common.sh@852 -- # return 0 00:20:06.211 17:12:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:06.211 [2024-07-20 17:12:22.328727] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.470 TLSTESTn1 00:20:06.470 17:12:22 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:06.470 Running I/O for 10 seconds... 00:20:18.654 00:20:18.654 Latency(us) 00:20:18.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.654 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:18.654 Verification LBA range: start 0x0 length 0x2000 00:20:18.654 TLSTESTn1 : 10.06 982.96 3.84 0.00 0.00 129931.77 4975.88 158451.48 00:20:18.654 =================================================================================================================== 00:20:18.654 Total : 982.96 3.84 0.00 0.00 129931.77 4975.88 158451.48 00:20:18.654 0 00:20:18.654 17:12:32 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:18.654 17:12:32 -- target/tls.sh@45 -- # killprocess 565817 00:20:18.654 17:12:32 -- common/autotest_common.sh@926 -- # '[' -z 565817 ']' 00:20:18.654 17:12:32 -- common/autotest_common.sh@930 -- # kill -0 565817 00:20:18.654 17:12:32 -- common/autotest_common.sh@931 -- # uname 00:20:18.654 17:12:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.654 17:12:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 565817 00:20:18.654 17:12:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:18.654 17:12:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:18.654 17:12:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 565817' 00:20:18.654 killing process with pid 565817 00:20:18.654 17:12:32 -- common/autotest_common.sh@945 -- # kill 565817 00:20:18.654 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.654 00:20:18.654 Latency(us) 00:20:18.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.654 =================================================================================================================== 00:20:18.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.654 17:12:32 -- common/autotest_common.sh@950 -- # wait 565817 00:20:18.654 17:12:32 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:18.654 17:12:32 -- common/autotest_common.sh@640 -- # local es=0 00:20:18.654 17:12:32 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:18.654 17:12:32 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:18.654 17:12:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.655 17:12:32 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:18.655 17:12:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.655 17:12:32 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:18.655 17:12:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.655 17:12:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.655 17:12:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.655 17:12:32 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:18.655 17:12:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.655 17:12:32 -- target/tls.sh@28 -- # bdevperf_pid=567305 00:20:18.655 17:12:32 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.655 17:12:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.655 17:12:32 -- target/tls.sh@31 -- # waitforlisten 567305 /var/tmp/bdevperf.sock 00:20:18.655 17:12:32 -- common/autotest_common.sh@819 -- # '[' -z 567305 ']' 00:20:18.655 17:12:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.655 17:12:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.655 17:12:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.655 17:12:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.655 17:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:18.655 [2024-07-20 17:12:32.920955] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
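From here through target/tls.sh@164 the attaches are meant to fail: wrong key (key2.txt against a target provisioned with key1.txt), wrong hostnqn, wrong subnqn, then no PSK at all. Each run_bdevperf is wrapped in NOT, so the test step passes only when the attach fails. A simplified sketch of that inversion idiom from autotest_common.sh (the real helper also validates its argument via valid_exec_arg, as the trace shows):

    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, remember its status
        (( es != 0 ))      # succeed only if the command failed
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 key2.txt

For this wrong-key case the TLS handshake never completes, so the initiator sees errno 107 (Transport endpoint is not connected) below, bdev_nvme_attach_controller returns a JSON-RPC error, and NOT counts that as the expected failure.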
00:20:18.655 [2024-07-20 17:12:32.921052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567305 ] 00:20:18.655 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.655 [2024-07-20 17:12:32.979233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.655 [2024-07-20 17:12:33.060232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.655 17:12:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.655 17:12:33 -- common/autotest_common.sh@852 -- # return 0 00:20:18.655 17:12:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:18.655 [2024-07-20 17:12:33.424321] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.655 [2024-07-20 17:12:33.433324] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:18.655 [2024-07-20 17:12:33.434262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c07f0 (107): Transport endpoint is not connected 00:20:18.655 [2024-07-20 17:12:33.435255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c07f0 (9): Bad file descriptor 00:20:18.655 [2024-07-20 17:12:33.436253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.655 [2024-07-20 17:12:33.436273] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:18.655 [2024-07-20 17:12:33.436302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:18.655 request: 00:20:18.655 { 00:20:18.655 "name": "TLSTEST", 00:20:18.655 "trtype": "tcp", 00:20:18.655 "traddr": "10.0.0.2", 00:20:18.655 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.655 "adrfam": "ipv4", 00:20:18.655 "trsvcid": "4420", 00:20:18.655 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.655 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:18.655 "method": "bdev_nvme_attach_controller", 00:20:18.655 "req_id": 1 00:20:18.655 } 00:20:18.655 Got JSON-RPC error response 00:20:18.655 response: 00:20:18.655 { 00:20:18.655 "code": -32602, 00:20:18.655 "message": "Invalid parameters" 00:20:18.655 } 00:20:18.655 17:12:33 -- target/tls.sh@36 -- # killprocess 567305 00:20:18.655 17:12:33 -- common/autotest_common.sh@926 -- # '[' -z 567305 ']' 00:20:18.655 17:12:33 -- common/autotest_common.sh@930 -- # kill -0 567305 00:20:18.655 17:12:33 -- common/autotest_common.sh@931 -- # uname 00:20:18.655 17:12:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.655 17:12:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 567305 00:20:18.655 17:12:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:18.655 17:12:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:18.655 17:12:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 567305' 00:20:18.655 killing process with pid 567305 00:20:18.655 17:12:33 -- common/autotest_common.sh@945 -- # kill 567305 00:20:18.655 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.655 00:20:18.655 Latency(us) 00:20:18.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.655 =================================================================================================================== 00:20:18.655 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.655 17:12:33 -- common/autotest_common.sh@950 -- # wait 567305 00:20:18.655 17:12:33 -- target/tls.sh@37 -- # return 1 00:20:18.655 17:12:33 -- common/autotest_common.sh@643 -- # es=1 00:20:18.655 17:12:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:18.655 17:12:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:18.655 17:12:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:18.655 17:12:33 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.655 17:12:33 -- common/autotest_common.sh@640 -- # local es=0 00:20:18.655 17:12:33 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.655 17:12:33 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:18.655 17:12:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.655 17:12:33 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:18.655 17:12:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.655 17:12:33 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.655 17:12:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.655 17:12:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.655 17:12:33 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host2 00:20:18.655 17:12:33 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:18.655 17:12:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.655 17:12:33 -- target/tls.sh@28 -- # bdevperf_pid=567331 00:20:18.655 17:12:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.655 17:12:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.655 17:12:33 -- target/tls.sh@31 -- # waitforlisten 567331 /var/tmp/bdevperf.sock 00:20:18.655 17:12:33 -- common/autotest_common.sh@819 -- # '[' -z 567331 ']' 00:20:18.655 17:12:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.655 17:12:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.655 17:12:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.655 17:12:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.655 17:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:18.655 [2024-07-20 17:12:33.724710] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:18.655 [2024-07-20 17:12:33.724820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567331 ] 00:20:18.655 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.655 [2024-07-20 17:12:33.791805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.655 [2024-07-20 17:12:33.882803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.655 17:12:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.655 17:12:34 -- common/autotest_common.sh@852 -- # return 0 00:20:18.655 17:12:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.913 [2024-07-20 17:12:34.920917] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.913 [2024-07-20 17:12:34.927077] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:18.913 [2024-07-20 17:12:34.927124] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:18.913 [2024-07-20 17:12:34.927176] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:18.913 [2024-07-20 17:12:34.928165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255a7f0 (107): Transport endpoint is not connected 00:20:18.913 [2024-07-20 17:12:34.929141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x255a7f0 (9): Bad file descriptor 00:20:18.913 [2024-07-20 17:12:34.930140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.913 [2024-07-20 17:12:34.930172] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:18.913 [2024-07-20 17:12:34.930186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:18.913 request: 00:20:18.913 { 00:20:18.913 "name": "TLSTEST", 00:20:18.913 "trtype": "tcp", 00:20:18.913 "traddr": "10.0.0.2", 00:20:18.913 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:18.913 "adrfam": "ipv4", 00:20:18.913 "trsvcid": "4420", 00:20:18.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.913 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:18.913 "method": "bdev_nvme_attach_controller", 00:20:18.913 "req_id": 1 00:20:18.913 } 00:20:18.913 Got JSON-RPC error response 00:20:18.913 response: 00:20:18.913 { 00:20:18.913 "code": -32602, 00:20:18.913 "message": "Invalid parameters" 00:20:18.913 } 00:20:18.913 17:12:34 -- target/tls.sh@36 -- # killprocess 567331 00:20:18.913 17:12:34 -- common/autotest_common.sh@926 -- # '[' -z 567331 ']' 00:20:18.913 17:12:34 -- common/autotest_common.sh@930 -- # kill -0 567331 00:20:18.914 17:12:34 -- common/autotest_common.sh@931 -- # uname 00:20:18.914 17:12:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.914 17:12:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 567331 00:20:18.914 17:12:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:18.914 17:12:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:18.914 17:12:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 567331' 00:20:18.914 killing process with pid 567331 00:20:18.914 17:12:34 -- common/autotest_common.sh@945 -- # kill 567331 00:20:18.914 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.914 00:20:18.914 Latency(us) 00:20:18.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.914 =================================================================================================================== 00:20:18.914 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.914 17:12:34 -- common/autotest_common.sh@950 -- # wait 567331 00:20:19.171 17:12:35 -- target/tls.sh@37 -- # return 1 00:20:19.171 17:12:35 -- common/autotest_common.sh@643 -- # es=1 00:20:19.172 17:12:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:19.172 17:12:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:19.172 17:12:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:19.172 17:12:35 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.172 17:12:35 -- common/autotest_common.sh@640 -- # local es=0 00:20:19.172 17:12:35 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.172 17:12:35 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:19.172 17:12:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.172 17:12:35 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:19.172 17:12:35 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.172 17:12:35 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.172 17:12:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.172 17:12:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:19.172 17:12:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.172 17:12:35 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:19.172 17:12:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.172 17:12:35 -- target/tls.sh@28 -- # bdevperf_pid=567596 00:20:19.172 17:12:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.172 17:12:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.172 17:12:35 -- target/tls.sh@31 -- # waitforlisten 567596 /var/tmp/bdevperf.sock 00:20:19.172 17:12:35 -- common/autotest_common.sh@819 -- # '[' -z 567596 ']' 00:20:19.172 17:12:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.172 17:12:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:19.172 17:12:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.172 17:12:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:19.172 17:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:19.172 [2024-07-20 17:12:35.231351] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
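The attach below asks for subsystem cnode2, for which no host/PSK pairing was ever registered, so the target's PSK lookup fails during the handshake; the identity it searches by, shown in the tcp.c/posix.c errors below (and in the host2 case above), is the hash-tagged prefix plus both NQNs, e.g. NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2. A hypothetical triage sketch that checks whether a host is registered on a subsystem before attempting a TLS attach (the helper name and the jq field layout are assumptions, not part of this run):

    check_host_registered() {
        local host=$1 subsys=$2
        scripts/rpc.py nvmf_get_subsystems \
            | jq -r --arg n "$subsys" '.[] | select(.nqn==$n) | .hosts[]?.nqn' \
            | grep -qx "$host" \
            || echo "no host entry (and thus no PSK) for $host on $subsys"
    }
    check_host_registered nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2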
00:20:19.172 [2024-07-20 17:12:35.231428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567596 ] 00:20:19.172 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.172 [2024-07-20 17:12:35.290400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.429 [2024-07-20 17:12:35.374545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.363 17:12:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:20.363 17:12:36 -- common/autotest_common.sh@852 -- # return 0 00:20:20.363 17:12:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:20.363 [2024-07-20 17:12:36.419644] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.363 [2024-07-20 17:12:36.430910] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:20.363 [2024-07-20 17:12:36.430946] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:20.363 [2024-07-20 17:12:36.431002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:20.363 [2024-07-20 17:12:36.431790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efd7f0 (107): Transport endpoint is not connected 00:20:20.363 [2024-07-20 17:12:36.432766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1efd7f0 (9): Bad file descriptor 00:20:20.363 [2024-07-20 17:12:36.433765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:20.363 [2024-07-20 17:12:36.433804] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:20.363 [2024-07-20 17:12:36.433820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:20.363 request: 00:20:20.363 { 00:20:20.363 "name": "TLSTEST", 00:20:20.363 "trtype": "tcp", 00:20:20.363 "traddr": "10.0.0.2", 00:20:20.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.363 "adrfam": "ipv4", 00:20:20.363 "trsvcid": "4420", 00:20:20.363 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.363 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:20.363 "method": "bdev_nvme_attach_controller", 00:20:20.363 "req_id": 1 00:20:20.363 } 00:20:20.363 Got JSON-RPC error response 00:20:20.363 response: 00:20:20.363 { 00:20:20.363 "code": -32602, 00:20:20.363 "message": "Invalid parameters" 00:20:20.363 } 00:20:20.363 17:12:36 -- target/tls.sh@36 -- # killprocess 567596 00:20:20.363 17:12:36 -- common/autotest_common.sh@926 -- # '[' -z 567596 ']' 00:20:20.363 17:12:36 -- common/autotest_common.sh@930 -- # kill -0 567596 00:20:20.363 17:12:36 -- common/autotest_common.sh@931 -- # uname 00:20:20.363 17:12:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:20.363 17:12:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 567596 00:20:20.363 17:12:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:20.363 17:12:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:20.363 17:12:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 567596' 00:20:20.363 killing process with pid 567596 00:20:20.363 17:12:36 -- common/autotest_common.sh@945 -- # kill 567596 00:20:20.363 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.363 00:20:20.363 Latency(us) 00:20:20.363 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.363 =================================================================================================================== 00:20:20.363 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:20.363 17:12:36 -- common/autotest_common.sh@950 -- # wait 567596 00:20:20.621 17:12:36 -- target/tls.sh@37 -- # return 1 00:20:20.621 17:12:36 -- common/autotest_common.sh@643 -- # es=1 00:20:20.621 17:12:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:20.621 17:12:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:20.621 17:12:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:20.621 17:12:36 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.621 17:12:36 -- common/autotest_common.sh@640 -- # local es=0 00:20:20.621 17:12:36 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.621 17:12:36 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:20.621 17:12:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:20.621 17:12:36 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:20.621 17:12:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:20.621 17:12:36 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:20.621 17:12:36 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.621 17:12:36 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:20.621 17:12:36 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.621 17:12:36 -- target/tls.sh@23 -- # psk= 00:20:20.621 17:12:36 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.621 17:12:36 -- target/tls.sh@28 -- # 
bdevperf_pid=567751 00:20:20.621 17:12:36 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.621 17:12:36 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.621 17:12:36 -- target/tls.sh@31 -- # waitforlisten 567751 /var/tmp/bdevperf.sock 00:20:20.621 17:12:36 -- common/autotest_common.sh@819 -- # '[' -z 567751 ']' 00:20:20.621 17:12:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.621 17:12:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:20.621 17:12:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.621 17:12:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:20.621 17:12:36 -- common/autotest_common.sh@10 -- # set +x 00:20:20.621 [2024-07-20 17:12:36.738419] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:20.621 [2024-07-20 17:12:36.738493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567751 ] 00:20:20.621 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.879 [2024-07-20 17:12:36.797665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.879 [2024-07-20 17:12:36.882202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.812 17:12:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:21.812 17:12:37 -- common/autotest_common.sh@852 -- # return 0 00:20:21.812 17:12:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:21.812 [2024-07-20 17:12:37.945033] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:21.812 [2024-07-20 17:12:37.946921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2116ec0 (9): Bad file descriptor 00:20:21.812 [2024-07-20 17:12:37.947916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.812 [2024-07-20 17:12:37.947937] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:21.812 [2024-07-20 17:12:37.947951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:21.812 request: 00:20:21.812 { 00:20:21.812 "name": "TLSTEST", 00:20:21.812 "trtype": "tcp", 00:20:21.812 "traddr": "10.0.0.2", 00:20:21.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.812 "adrfam": "ipv4", 00:20:21.812 "trsvcid": "4420", 00:20:21.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.812 "method": "bdev_nvme_attach_controller", 00:20:21.812 "req_id": 1 00:20:21.812 } 00:20:21.812 Got JSON-RPC error response 00:20:21.812 response: 00:20:21.812 { 00:20:21.812 "code": -32602, 00:20:21.812 "message": "Invalid parameters" 00:20:21.812 } 00:20:21.812 17:12:37 -- target/tls.sh@36 -- # killprocess 567751 00:20:21.812 17:12:37 -- common/autotest_common.sh@926 -- # '[' -z 567751 ']' 00:20:21.812 17:12:37 -- common/autotest_common.sh@930 -- # kill -0 567751 00:20:21.812 17:12:37 -- common/autotest_common.sh@931 -- # uname 00:20:21.812 17:12:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.070 17:12:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 567751 00:20:22.070 17:12:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:22.070 17:12:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:22.070 17:12:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 567751' 00:20:22.070 killing process with pid 567751 00:20:22.070 17:12:37 -- common/autotest_common.sh@945 -- # kill 567751 00:20:22.070 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.070 00:20:22.070 Latency(us) 00:20:22.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.070 =================================================================================================================== 00:20:22.070 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:22.070 17:12:37 -- common/autotest_common.sh@950 -- # wait 567751 00:20:22.070 17:12:38 -- target/tls.sh@37 -- # return 1 00:20:22.070 17:12:38 -- common/autotest_common.sh@643 -- # es=1 00:20:22.071 17:12:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:22.071 17:12:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:22.071 17:12:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:22.071 17:12:38 -- target/tls.sh@167 -- # killprocess 563474 00:20:22.071 17:12:38 -- common/autotest_common.sh@926 -- # '[' -z 563474 ']' 00:20:22.071 17:12:38 -- common/autotest_common.sh@930 -- # kill -0 563474 00:20:22.071 17:12:38 -- common/autotest_common.sh@931 -- # uname 00:20:22.071 17:12:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.071 17:12:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 563474 00:20:22.330 17:12:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:22.330 17:12:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:22.330 17:12:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 563474' 00:20:22.330 killing process with pid 563474 00:20:22.330 17:12:38 -- common/autotest_common.sh@945 -- # kill 563474 00:20:22.330 17:12:38 -- common/autotest_common.sh@950 -- # wait 563474 00:20:22.330 17:12:38 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:22.330 17:12:38 -- target/tls.sh@49 -- # local key hash crc 00:20:22.330 17:12:38 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:22.330 17:12:38 -- target/tls.sh@51 -- # hash=02 00:20:22.330 17:12:38 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:20:22.330 17:12:38 -- target/tls.sh@52 -- # gzip -1 -c 00:20:22.330 17:12:38 -- target/tls.sh@52 -- # tail -c8 00:20:22.330 17:12:38 -- target/tls.sh@52 -- # head -c 4 00:20:22.330 17:12:38 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:22.330 17:12:38 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:22.330 17:12:38 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:22.330 17:12:38 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:22.330 17:12:38 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:22.330 17:12:38 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:22.330 17:12:38 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:22.330 17:12:38 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:22.330 17:12:38 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:22.330 17:12:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:22.330 17:12:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:22.330 17:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:22.330 17:12:38 -- nvmf/common.sh@469 -- # nvmfpid=568038 00:20:22.330 17:12:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:22.330 17:12:38 -- nvmf/common.sh@470 -- # waitforlisten 568038 00:20:22.330 17:12:38 -- common/autotest_common.sh@819 -- # '[' -z 568038 ']' 00:20:22.330 17:12:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.330 17:12:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:22.330 17:12:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.330 17:12:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:22.330 17:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:22.589 [2024-07-20 17:12:38.525343] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:22.589 [2024-07-20 17:12:38.525415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.589 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.589 [2024-07-20 17:12:38.592279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.589 [2024-07-20 17:12:38.679422] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:22.589 [2024-07-20 17:12:38.679588] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.589 [2024-07-20 17:12:38.679607] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.589 [2024-07-20 17:12:38.679622] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
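The key_long derivation just traced is the same interchange pipeline as before, only with a 48-character configured key and hash tag 02 instead of 01 (in the interchange format the tag selects the hash, 01 for SHA-256 and 02 for SHA-384). With the sketch function from earlier:

    # Reuses the hypothetical format_interchange_psk_sketch defined above.
    format_interchange_psk_sketch 00112233445566778899aabbccddeeff0011223344556677 02
    # expected: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: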
00:20:22.589 [2024-07-20 17:12:38.679655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.524 17:12:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:23.524 17:12:39 -- common/autotest_common.sh@852 -- # return 0 00:20:23.524 17:12:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:23.524 17:12:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:23.524 17:12:39 -- common/autotest_common.sh@10 -- # set +x 00:20:23.524 17:12:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.524 17:12:39 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.524 17:12:39 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.524 17:12:39 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:23.782 [2024-07-20 17:12:39.736506] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.782 17:12:39 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:24.039 17:12:39 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:24.298 [2024-07-20 17:12:40.205818] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.298 [2024-07-20 17:12:40.206088] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.298 17:12:40 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:24.298 malloc0 00:20:24.556 17:12:40 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:24.814 17:12:40 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:24.814 17:12:40 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:24.814 17:12:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:24.814 17:12:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:24.814 17:12:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:24.814 17:12:40 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:24.814 17:12:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.814 17:12:40 -- target/tls.sh@28 -- # bdevperf_pid=568338 00:20:24.814 17:12:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.814 17:12:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.814 17:12:40 -- target/tls.sh@31 -- # waitforlisten 568338 /var/tmp/bdevperf.sock 00:20:24.814 17:12:40 -- common/autotest_common.sh@819 -- # '[' -z 568338 ']' 
00:20:24.814 17:12:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.814 17:12:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:24.814 17:12:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.814 17:12:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:24.814 17:12:40 -- common/autotest_common.sh@10 -- # set +x 00:20:25.072 [2024-07-20 17:12:40.994005] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:25.072 [2024-07-20 17:12:40.994080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568338 ] 00:20:25.072 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.072 [2024-07-20 17:12:41.053051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.072 [2024-07-20 17:12:41.133566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.042 17:12:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:26.042 17:12:41 -- common/autotest_common.sh@852 -- # return 0 00:20:26.042 17:12:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:26.042 [2024-07-20 17:12:42.129908] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.298 TLSTESTn1 00:20:26.298 17:12:42 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:26.298 Running I/O for 10 seconds... 
00:20:36.255 00:20:36.255 Latency(us) 00:20:36.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.255 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:36.255 Verification LBA range: start 0x0 length 0x2000 00:20:36.255 TLSTESTn1 : 10.06 974.91 3.81 0.00 0.00 130975.09 4781.70 165441.99 00:20:36.255 =================================================================================================================== 00:20:36.255 Total : 974.91 3.81 0.00 0.00 130975.09 4781.70 165441.99 00:20:36.255 0 00:20:36.512 17:12:52 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.512 17:12:52 -- target/tls.sh@45 -- # killprocess 568338 00:20:36.512 17:12:52 -- common/autotest_common.sh@926 -- # '[' -z 568338 ']' 00:20:36.512 17:12:52 -- common/autotest_common.sh@930 -- # kill -0 568338 00:20:36.512 17:12:52 -- common/autotest_common.sh@931 -- # uname 00:20:36.512 17:12:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:36.512 17:12:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 568338 00:20:36.512 17:12:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:36.512 17:12:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:36.512 17:12:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 568338' 00:20:36.512 killing process with pid 568338 00:20:36.512 17:12:52 -- common/autotest_common.sh@945 -- # kill 568338 00:20:36.512 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.512 00:20:36.512 Latency(us) 00:20:36.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.512 =================================================================================================================== 00:20:36.512 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.512 17:12:52 -- common/autotest_common.sh@950 -- # wait 568338 00:20:36.770 17:12:52 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:36.770 17:12:52 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:36.770 17:12:52 -- common/autotest_common.sh@640 -- # local es=0 00:20:36.770 17:12:52 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:36.770 17:12:52 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:36.770 17:12:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:36.770 17:12:52 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:36.770 17:12:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:36.770 17:12:52 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:36.770 17:12:52 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.770 17:12:52 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.770 17:12:52 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.770 17:12:52 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:36.770 17:12:52 -- target/tls.sh@25 -- 
# bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.770 17:12:52 -- target/tls.sh@28 -- # bdevperf_pid=569714 00:20:36.770 17:12:52 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.770 17:12:52 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.770 17:12:52 -- target/tls.sh@31 -- # waitforlisten 569714 /var/tmp/bdevperf.sock 00:20:36.770 17:12:52 -- common/autotest_common.sh@819 -- # '[' -z 569714 ']' 00:20:36.770 17:12:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.770 17:12:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:36.770 17:12:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.770 17:12:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:36.770 17:12:52 -- common/autotest_common.sh@10 -- # set +x 00:20:36.770 [2024-07-20 17:12:52.731714] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:36.770 [2024-07-20 17:12:52.731802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569714 ] 00:20:36.770 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.770 [2024-07-20 17:12:52.790370] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.770 [2024-07-20 17:12:52.873772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.728 17:12:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.728 17:12:53 -- common/autotest_common.sh@852 -- # return 0 00:20:37.728 17:12:53 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.985 [2024-07-20 17:12:53.958459] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.986 [2024-07-20 17:12:53.958510] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:37.986 request: 00:20:37.986 { 00:20:37.986 "name": "TLSTEST", 00:20:37.986 "trtype": "tcp", 00:20:37.986 "traddr": "10.0.0.2", 00:20:37.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.986 "adrfam": "ipv4", 00:20:37.986 "trsvcid": "4420", 00:20:37.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.986 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:37.986 "method": "bdev_nvme_attach_controller", 00:20:37.986 "req_id": 1 00:20:37.986 } 00:20:37.986 Got JSON-RPC error response 00:20:37.986 response: 00:20:37.986 { 00:20:37.986 "code": -22, 00:20:37.986 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:37.986 } 00:20:37.986 17:12:53 -- target/tls.sh@36 -- # killprocess 569714 00:20:37.986 17:12:53 -- common/autotest_common.sh@926 -- # '[' -z 569714 ']' 00:20:37.986 17:12:53 -- common/autotest_common.sh@930 -- # kill -0 569714 
00:20:37.986 17:12:53 -- common/autotest_common.sh@931 -- # uname 00:20:37.986 17:12:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.986 17:12:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 569714 00:20:37.986 17:12:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:37.986 17:12:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:37.986 17:12:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 569714' 00:20:37.986 killing process with pid 569714 00:20:37.986 17:12:54 -- common/autotest_common.sh@945 -- # kill 569714 00:20:37.986 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.986 00:20:37.986 Latency(us) 00:20:37.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.986 =================================================================================================================== 00:20:37.986 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.986 17:12:54 -- common/autotest_common.sh@950 -- # wait 569714 00:20:38.243 17:12:54 -- target/tls.sh@37 -- # return 1 00:20:38.243 17:12:54 -- common/autotest_common.sh@643 -- # es=1 00:20:38.243 17:12:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:38.243 17:12:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:38.243 17:12:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:38.243 17:12:54 -- target/tls.sh@183 -- # killprocess 568038 00:20:38.243 17:12:54 -- common/autotest_common.sh@926 -- # '[' -z 568038 ']' 00:20:38.243 17:12:54 -- common/autotest_common.sh@930 -- # kill -0 568038 00:20:38.243 17:12:54 -- common/autotest_common.sh@931 -- # uname 00:20:38.243 17:12:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.243 17:12:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 568038 00:20:38.243 17:12:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:38.243 17:12:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:38.243 17:12:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 568038' 00:20:38.243 killing process with pid 568038 00:20:38.243 17:12:54 -- common/autotest_common.sh@945 -- # kill 568038 00:20:38.243 17:12:54 -- common/autotest_common.sh@950 -- # wait 568038 00:20:38.500 17:12:54 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:38.500 17:12:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:38.500 17:12:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:38.500 17:12:54 -- common/autotest_common.sh@10 -- # set +x 00:20:38.500 17:12:54 -- nvmf/common.sh@469 -- # nvmfpid=569992 00:20:38.500 17:12:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.500 17:12:54 -- nvmf/common.sh@470 -- # waitforlisten 569992 00:20:38.500 17:12:54 -- common/autotest_common.sh@819 -- # '[' -z 569992 ']' 00:20:38.500 17:12:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.500 17:12:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:38.500 17:12:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
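The chmod 0666 at target/tls.sh@179 is what made the initiator-side load fail above with JSON-RPC error -22 (bdev_nvme_rpc.c: Incorrect permissions for PSK file); the freshly restarted target below hits the same gate in tcp.c when nvmf_subsystem_add_host tries to read the still world-readable key. A pre-flight check a wrapper script could run, as an illustrative sketch only (the real gates live in tcp.c and bdev_nvme_rpc.c; GNU stat is assumed, and the exact-0600 test is a simplification):

    psk_perms_ok() {
        # refuse a PSK file readable beyond its owner
        [ "$(stat -c '%a' "$1")" = "600" ]
    }
    psk_perms_ok test/nvmf/target/key_long.txt \
        || echo "PSK file must be 0600 before add_host / attach_controller"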
00:20:38.500 17:12:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:38.500 17:12:54 -- common/autotest_common.sh@10 -- # set +x 00:20:38.500 [2024-07-20 17:12:54.527399] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:38.500 [2024-07-20 17:12:54.527485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.500 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.500 [2024-07-20 17:12:54.594724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.758 [2024-07-20 17:12:54.686519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:38.758 [2024-07-20 17:12:54.686680] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.758 [2024-07-20 17:12:54.686700] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.758 [2024-07-20 17:12:54.686715] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.758 [2024-07-20 17:12:54.686745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.690 17:12:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:39.690 17:12:55 -- common/autotest_common.sh@852 -- # return 0 00:20:39.690 17:12:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:39.690 17:12:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:39.690 17:12:55 -- common/autotest_common.sh@10 -- # set +x 00:20:39.690 17:12:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.690 17:12:55 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.690 17:12:55 -- common/autotest_common.sh@640 -- # local es=0 00:20:39.690 17:12:55 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.690 17:12:55 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:39.690 17:12:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:39.690 17:12:55 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:39.690 17:12:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:39.690 17:12:55 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.690 17:12:55 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:39.690 17:12:55 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.690 [2024-07-20 17:12:55.752703] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.690 17:12:55 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.947 17:12:55 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.205 [2024-07-20 17:12:56.209971] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.205 [2024-07-20 17:12:56.210239] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.205 17:12:56 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:40.463 malloc0 00:20:40.463 17:12:56 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.720 17:12:56 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.978 [2024-07-20 17:12:56.951998] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:40.978 [2024-07-20 17:12:56.952043] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:40.978 [2024-07-20 17:12:56.952069] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:40.978 request: 00:20:40.978 { 00:20:40.978 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.978 "host": "nqn.2016-06.io.spdk:host1", 00:20:40.978 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:40.978 "method": "nvmf_subsystem_add_host", 00:20:40.978 "req_id": 1 00:20:40.978 } 00:20:40.978 Got JSON-RPC error response 00:20:40.978 response: 00:20:40.978 { 00:20:40.978 "code": -32603, 00:20:40.978 "message": "Internal error" 00:20:40.978 } 00:20:40.978 17:12:56 -- common/autotest_common.sh@643 -- # es=1 00:20:40.978 17:12:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:40.978 17:12:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:40.978 17:12:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:40.978 17:12:56 -- target/tls.sh@189 -- # killprocess 569992 00:20:40.978 17:12:56 -- common/autotest_common.sh@926 -- # '[' -z 569992 ']' 00:20:40.978 17:12:56 -- common/autotest_common.sh@930 -- # kill -0 569992 00:20:40.978 17:12:56 -- common/autotest_common.sh@931 -- # uname 00:20:40.978 17:12:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:40.978 17:12:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 569992 00:20:40.978 17:12:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:40.978 17:12:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:40.978 17:12:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 569992' 00:20:40.978 killing process with pid 569992 00:20:40.978 17:12:57 -- common/autotest_common.sh@945 -- # kill 569992 00:20:40.978 17:12:57 -- common/autotest_common.sh@950 -- # wait 569992 00:20:41.235 17:12:57 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:41.235 17:12:57 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:41.235 17:12:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:41.235 17:12:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:41.235 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.235 17:12:57 -- nvmf/common.sh@469 -- # nvmfpid=570301 00:20:41.235 17:12:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
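The NOT-wrapped pass above confirms that the target side fails the same way: tcp_load_psk rejects the too-permissive key inside nvmf_subsystem_add_host, the RPC surfaces it as -32603 "Internal error", and tls.sh@190 then runs chmod 0600 on key_long.txt before restarting the target. For reference, the full target-side sequence that the successful pass replays, copied from the traces in this log:

scripts/rpc.py nvmf_create_transport -t tcp -o        # -o disables the C2H
                                                      # success optimization
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10                       # serial, max 10 namespaces
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                     # -k: TLS (secure channel)
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0  # 32 MiB, 4 KiB blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt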
00:20:41.235 17:12:57 -- nvmf/common.sh@470 -- # waitforlisten 570301 00:20:41.235 17:12:57 -- common/autotest_common.sh@819 -- # '[' -z 570301 ']' 00:20:41.235 17:12:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.235 17:12:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:41.235 17:12:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.235 17:12:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:41.235 17:12:57 -- common/autotest_common.sh@10 -- # set +x 00:20:41.235 [2024-07-20 17:12:57.312174] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:41.235 [2024-07-20 17:12:57.312273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.235 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.235 [2024-07-20 17:12:57.381446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.492 [2024-07-20 17:12:57.467661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:41.492 [2024-07-20 17:12:57.467848] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.492 [2024-07-20 17:12:57.467871] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.492 [2024-07-20 17:12:57.467886] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
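The target runs with -e 0xFFFF, so every tracepoint group is enabled and events accumulate in the shared-memory file named in the notices above. Either of the following works, exactly as the startup messages suggest:

spdk_trace -s nvmf -i 0          # live snapshot of the running target
cp /dev/shm/nvmf_trace.0 .       # or keep the shm file for offline analysis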
00:20:41.492 [2024-07-20 17:12:57.467917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.423 17:12:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:42.423 17:12:58 -- common/autotest_common.sh@852 -- # return 0 00:20:42.423 17:12:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:42.423 17:12:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:42.423 17:12:58 -- common/autotest_common.sh@10 -- # set +x 00:20:42.423 17:12:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.423 17:12:58 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:42.423 17:12:58 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:42.423 17:12:58 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.423 [2024-07-20 17:12:58.571298] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.680 17:12:58 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:42.938 17:12:58 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:42.938 [2024-07-20 17:12:59.056598] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.938 [2024-07-20 17:12:59.056857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.938 17:12:59 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:43.197 malloc0 00:20:43.197 17:12:59 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:43.454 17:12:59 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:43.712 17:12:59 -- target/tls.sh@197 -- # bdevperf_pid=570650 00:20:43.712 17:12:59 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.712 17:12:59 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.712 17:12:59 -- target/tls.sh@200 -- # waitforlisten 570650 /var/tmp/bdevperf.sock 00:20:43.712 17:12:59 -- common/autotest_common.sh@819 -- # '[' -z 570650 ']' 00:20:43.712 17:12:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.712 17:12:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:43.712 17:12:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
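With the key now mode 0600, the second pass sets the target up without errors, and the bdevperf attach that follows succeeds, creating the TLSTESTn1 bdev over a TLS-protected queue pair. A sketch of the success path, assuming the same socket paths as above (bdev_get_bdevs is the stock RPC for confirming the bdev exists):

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/target/key_long.txt
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b TLSTESTn1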
00:20:43.712 17:12:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:43.712 17:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:43.712 [2024-07-20 17:12:59.824282] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:43.712 [2024-07-20 17:12:59.824357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid570650 ] 00:20:43.712 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.971 [2024-07-20 17:12:59.886194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.971 [2024-07-20 17:12:59.972916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.905 17:13:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:44.905 17:13:00 -- common/autotest_common.sh@852 -- # return 0 00:20:44.905 17:13:00 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:44.905 [2024-07-20 17:13:01.061140] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.163 TLSTESTn1 00:20:45.164 17:13:01 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:45.422 17:13:01 -- target/tls.sh@205 -- # tgtconf='{ 00:20:45.422 "subsystems": [ 00:20:45.422 { 00:20:45.422 "subsystem": "iobuf", 00:20:45.422 "config": [ 00:20:45.422 { 00:20:45.422 "method": "iobuf_set_options", 00:20:45.422 "params": { 00:20:45.422 "small_pool_count": 8192, 00:20:45.422 "large_pool_count": 1024, 00:20:45.422 "small_bufsize": 8192, 00:20:45.422 "large_bufsize": 135168 00:20:45.422 } 00:20:45.422 } 00:20:45.422 ] 00:20:45.422 }, 00:20:45.422 { 00:20:45.422 "subsystem": "sock", 00:20:45.422 "config": [ 00:20:45.422 { 00:20:45.422 "method": "sock_impl_set_options", 00:20:45.422 "params": { 00:20:45.422 "impl_name": "posix", 00:20:45.422 "recv_buf_size": 2097152, 00:20:45.422 "send_buf_size": 2097152, 00:20:45.422 "enable_recv_pipe": true, 00:20:45.422 "enable_quickack": false, 00:20:45.422 "enable_placement_id": 0, 00:20:45.422 "enable_zerocopy_send_server": true, 00:20:45.422 "enable_zerocopy_send_client": false, 00:20:45.422 "zerocopy_threshold": 0, 00:20:45.422 "tls_version": 0, 00:20:45.422 "enable_ktls": false 00:20:45.422 } 00:20:45.422 }, 00:20:45.422 { 00:20:45.422 "method": "sock_impl_set_options", 00:20:45.422 "params": { 00:20:45.422 "impl_name": "ssl", 00:20:45.422 "recv_buf_size": 4096, 00:20:45.422 "send_buf_size": 4096, 00:20:45.422 "enable_recv_pipe": true, 00:20:45.422 "enable_quickack": false, 00:20:45.422 "enable_placement_id": 0, 00:20:45.422 "enable_zerocopy_send_server": true, 00:20:45.422 "enable_zerocopy_send_client": false, 00:20:45.422 "zerocopy_threshold": 0, 00:20:45.422 "tls_version": 0, 00:20:45.422 "enable_ktls": false 00:20:45.422 } 00:20:45.422 } 00:20:45.422 ] 00:20:45.422 }, 00:20:45.422 { 00:20:45.422 "subsystem": "vmd", 00:20:45.422 "config": [] 00:20:45.422 }, 00:20:45.422 { 00:20:45.422 "subsystem": "accel", 00:20:45.422 "config": [ 00:20:45.422 { 00:20:45.422 "method": "accel_set_options", 00:20:45.422 "params": { 00:20:45.422 "small_cache_size": 128, 
00:20:45.422 "large_cache_size": 16, 00:20:45.422 "task_count": 2048, 00:20:45.422 "sequence_count": 2048, 00:20:45.422 "buf_count": 2048 00:20:45.422 } 00:20:45.422 } 00:20:45.422 ] 00:20:45.422 }, 00:20:45.422 { 00:20:45.422 "subsystem": "bdev", 00:20:45.422 "config": [ 00:20:45.422 { 00:20:45.422 "method": "bdev_set_options", 00:20:45.422 "params": { 00:20:45.422 "bdev_io_pool_size": 65535, 00:20:45.422 "bdev_io_cache_size": 256, 00:20:45.423 "bdev_auto_examine": true, 00:20:45.423 "iobuf_small_cache_size": 128, 00:20:45.423 "iobuf_large_cache_size": 16 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "bdev_raid_set_options", 00:20:45.423 "params": { 00:20:45.423 "process_window_size_kb": 1024 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "bdev_iscsi_set_options", 00:20:45.423 "params": { 00:20:45.423 "timeout_sec": 30 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "bdev_nvme_set_options", 00:20:45.423 "params": { 00:20:45.423 "action_on_timeout": "none", 00:20:45.423 "timeout_us": 0, 00:20:45.423 "timeout_admin_us": 0, 00:20:45.423 "keep_alive_timeout_ms": 10000, 00:20:45.423 "transport_retry_count": 4, 00:20:45.423 "arbitration_burst": 0, 00:20:45.423 "low_priority_weight": 0, 00:20:45.423 "medium_priority_weight": 0, 00:20:45.423 "high_priority_weight": 0, 00:20:45.423 "nvme_adminq_poll_period_us": 10000, 00:20:45.423 "nvme_ioq_poll_period_us": 0, 00:20:45.423 "io_queue_requests": 0, 00:20:45.423 "delay_cmd_submit": true, 00:20:45.423 "bdev_retry_count": 3, 00:20:45.423 "transport_ack_timeout": 0, 00:20:45.423 "ctrlr_loss_timeout_sec": 0, 00:20:45.423 "reconnect_delay_sec": 0, 00:20:45.423 "fast_io_fail_timeout_sec": 0, 00:20:45.423 "generate_uuids": false, 00:20:45.423 "transport_tos": 0, 00:20:45.423 "io_path_stat": false, 00:20:45.423 "allow_accel_sequence": false 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "bdev_nvme_set_hotplug", 00:20:45.423 "params": { 00:20:45.423 "period_us": 100000, 00:20:45.423 "enable": false 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "bdev_malloc_create", 00:20:45.423 "params": { 00:20:45.423 "name": "malloc0", 00:20:45.423 "num_blocks": 8192, 00:20:45.423 "block_size": 4096, 00:20:45.423 "physical_block_size": 4096, 00:20:45.423 "uuid": "87878cc7-bdc4-4793-8141-46f1093d9a4f", 00:20:45.423 "optimal_io_boundary": 0 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "bdev_wait_for_examine" 00:20:45.423 } 00:20:45.423 ] 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "subsystem": "nbd", 00:20:45.423 "config": [] 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "subsystem": "scheduler", 00:20:45.423 "config": [ 00:20:45.423 { 00:20:45.423 "method": "framework_set_scheduler", 00:20:45.423 "params": { 00:20:45.423 "name": "static" 00:20:45.423 } 00:20:45.423 } 00:20:45.423 ] 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "subsystem": "nvmf", 00:20:45.423 "config": [ 00:20:45.423 { 00:20:45.423 "method": "nvmf_set_config", 00:20:45.423 "params": { 00:20:45.423 "discovery_filter": "match_any", 00:20:45.423 "admin_cmd_passthru": { 00:20:45.423 "identify_ctrlr": false 00:20:45.423 } 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "nvmf_set_max_subsystems", 00:20:45.423 "params": { 00:20:45.423 "max_subsystems": 1024 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "nvmf_set_crdt", 00:20:45.423 "params": { 00:20:45.423 "crdt1": 0, 00:20:45.423 "crdt2": 0, 00:20:45.423 "crdt3": 0 00:20:45.423 } 
00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "nvmf_create_transport", 00:20:45.423 "params": { 00:20:45.423 "trtype": "TCP", 00:20:45.423 "max_queue_depth": 128, 00:20:45.423 "max_io_qpairs_per_ctrlr": 127, 00:20:45.423 "in_capsule_data_size": 4096, 00:20:45.423 "max_io_size": 131072, 00:20:45.423 "io_unit_size": 131072, 00:20:45.423 "max_aq_depth": 128, 00:20:45.423 "num_shared_buffers": 511, 00:20:45.423 "buf_cache_size": 4294967295, 00:20:45.423 "dif_insert_or_strip": false, 00:20:45.423 "zcopy": false, 00:20:45.423 "c2h_success": false, 00:20:45.423 "sock_priority": 0, 00:20:45.423 "abort_timeout_sec": 1 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "nvmf_create_subsystem", 00:20:45.423 "params": { 00:20:45.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.423 "allow_any_host": false, 00:20:45.423 "serial_number": "SPDK00000000000001", 00:20:45.423 "model_number": "SPDK bdev Controller", 00:20:45.423 "max_namespaces": 10, 00:20:45.423 "min_cntlid": 1, 00:20:45.423 "max_cntlid": 65519, 00:20:45.423 "ana_reporting": false 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "nvmf_subsystem_add_host", 00:20:45.423 "params": { 00:20:45.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.423 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.423 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "nvmf_subsystem_add_ns", 00:20:45.423 "params": { 00:20:45.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.423 "namespace": { 00:20:45.423 "nsid": 1, 00:20:45.423 "bdev_name": "malloc0", 00:20:45.423 "nguid": "87878CC7BDC44793814146F1093D9A4F", 00:20:45.423 "uuid": "87878cc7-bdc4-4793-8141-46f1093d9a4f" 00:20:45.423 } 00:20:45.423 } 00:20:45.423 }, 00:20:45.423 { 00:20:45.423 "method": "nvmf_subsystem_add_listener", 00:20:45.423 "params": { 00:20:45.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.423 "listen_address": { 00:20:45.423 "trtype": "TCP", 00:20:45.423 "adrfam": "IPv4", 00:20:45.423 "traddr": "10.0.0.2", 00:20:45.423 "trsvcid": "4420" 00:20:45.423 }, 00:20:45.423 "secure_channel": true 00:20:45.423 } 00:20:45.423 } 00:20:45.423 ] 00:20:45.423 } 00:20:45.423 ] 00:20:45.423 }' 00:20:45.423 17:13:01 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:45.688 17:13:01 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:45.688 "subsystems": [ 00:20:45.688 { 00:20:45.688 "subsystem": "iobuf", 00:20:45.688 "config": [ 00:20:45.688 { 00:20:45.688 "method": "iobuf_set_options", 00:20:45.688 "params": { 00:20:45.688 "small_pool_count": 8192, 00:20:45.688 "large_pool_count": 1024, 00:20:45.688 "small_bufsize": 8192, 00:20:45.688 "large_bufsize": 135168 00:20:45.688 } 00:20:45.688 } 00:20:45.688 ] 00:20:45.688 }, 00:20:45.688 { 00:20:45.688 "subsystem": "sock", 00:20:45.688 "config": [ 00:20:45.688 { 00:20:45.688 "method": "sock_impl_set_options", 00:20:45.688 "params": { 00:20:45.688 "impl_name": "posix", 00:20:45.688 "recv_buf_size": 2097152, 00:20:45.688 "send_buf_size": 2097152, 00:20:45.688 "enable_recv_pipe": true, 00:20:45.688 "enable_quickack": false, 00:20:45.688 "enable_placement_id": 0, 00:20:45.688 "enable_zerocopy_send_server": true, 00:20:45.688 "enable_zerocopy_send_client": false, 00:20:45.688 "zerocopy_threshold": 0, 00:20:45.688 "tls_version": 0, 00:20:45.688 "enable_ktls": false 00:20:45.688 } 00:20:45.688 }, 00:20:45.688 { 00:20:45.688 "method": 
"sock_impl_set_options", 00:20:45.688 "params": { 00:20:45.688 "impl_name": "ssl", 00:20:45.688 "recv_buf_size": 4096, 00:20:45.688 "send_buf_size": 4096, 00:20:45.688 "enable_recv_pipe": true, 00:20:45.688 "enable_quickack": false, 00:20:45.688 "enable_placement_id": 0, 00:20:45.688 "enable_zerocopy_send_server": true, 00:20:45.688 "enable_zerocopy_send_client": false, 00:20:45.688 "zerocopy_threshold": 0, 00:20:45.688 "tls_version": 0, 00:20:45.688 "enable_ktls": false 00:20:45.688 } 00:20:45.688 } 00:20:45.688 ] 00:20:45.688 }, 00:20:45.688 { 00:20:45.688 "subsystem": "vmd", 00:20:45.688 "config": [] 00:20:45.688 }, 00:20:45.688 { 00:20:45.688 "subsystem": "accel", 00:20:45.688 "config": [ 00:20:45.688 { 00:20:45.688 "method": "accel_set_options", 00:20:45.688 "params": { 00:20:45.688 "small_cache_size": 128, 00:20:45.688 "large_cache_size": 16, 00:20:45.688 "task_count": 2048, 00:20:45.688 "sequence_count": 2048, 00:20:45.688 "buf_count": 2048 00:20:45.688 } 00:20:45.688 } 00:20:45.688 ] 00:20:45.688 }, 00:20:45.688 { 00:20:45.688 "subsystem": "bdev", 00:20:45.688 "config": [ 00:20:45.688 { 00:20:45.688 "method": "bdev_set_options", 00:20:45.688 "params": { 00:20:45.688 "bdev_io_pool_size": 65535, 00:20:45.688 "bdev_io_cache_size": 256, 00:20:45.688 "bdev_auto_examine": true, 00:20:45.688 "iobuf_small_cache_size": 128, 00:20:45.688 "iobuf_large_cache_size": 16 00:20:45.688 } 00:20:45.688 }, 00:20:45.688 { 00:20:45.688 "method": "bdev_raid_set_options", 00:20:45.688 "params": { 00:20:45.688 "process_window_size_kb": 1024 00:20:45.688 } 00:20:45.688 }, 00:20:45.688 { 00:20:45.688 "method": "bdev_iscsi_set_options", 00:20:45.688 "params": { 00:20:45.688 "timeout_sec": 30 00:20:45.688 } 00:20:45.688 }, 00:20:45.688 { 00:20:45.688 "method": "bdev_nvme_set_options", 00:20:45.688 "params": { 00:20:45.688 "action_on_timeout": "none", 00:20:45.688 "timeout_us": 0, 00:20:45.688 "timeout_admin_us": 0, 00:20:45.688 "keep_alive_timeout_ms": 10000, 00:20:45.688 "transport_retry_count": 4, 00:20:45.688 "arbitration_burst": 0, 00:20:45.688 "low_priority_weight": 0, 00:20:45.688 "medium_priority_weight": 0, 00:20:45.688 "high_priority_weight": 0, 00:20:45.689 "nvme_adminq_poll_period_us": 10000, 00:20:45.689 "nvme_ioq_poll_period_us": 0, 00:20:45.689 "io_queue_requests": 512, 00:20:45.689 "delay_cmd_submit": true, 00:20:45.689 "bdev_retry_count": 3, 00:20:45.689 "transport_ack_timeout": 0, 00:20:45.689 "ctrlr_loss_timeout_sec": 0, 00:20:45.689 "reconnect_delay_sec": 0, 00:20:45.689 "fast_io_fail_timeout_sec": 0, 00:20:45.689 "generate_uuids": false, 00:20:45.689 "transport_tos": 0, 00:20:45.689 "io_path_stat": false, 00:20:45.689 "allow_accel_sequence": false 00:20:45.689 } 00:20:45.689 }, 00:20:45.689 { 00:20:45.689 "method": "bdev_nvme_attach_controller", 00:20:45.689 "params": { 00:20:45.689 "name": "TLSTEST", 00:20:45.689 "trtype": "TCP", 00:20:45.689 "adrfam": "IPv4", 00:20:45.689 "traddr": "10.0.0.2", 00:20:45.689 "trsvcid": "4420", 00:20:45.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.689 "prchk_reftag": false, 00:20:45.689 "prchk_guard": false, 00:20:45.689 "ctrlr_loss_timeout_sec": 0, 00:20:45.689 "reconnect_delay_sec": 0, 00:20:45.689 "fast_io_fail_timeout_sec": 0, 00:20:45.689 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:45.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.689 "hdgst": false, 00:20:45.689 "ddgst": false 00:20:45.689 } 00:20:45.689 }, 00:20:45.689 { 00:20:45.689 "method": "bdev_nvme_set_hotplug", 00:20:45.689 
"params": { 00:20:45.689 "period_us": 100000, 00:20:45.689 "enable": false 00:20:45.689 } 00:20:45.689 }, 00:20:45.689 { 00:20:45.689 "method": "bdev_wait_for_examine" 00:20:45.689 } 00:20:45.689 ] 00:20:45.689 }, 00:20:45.689 { 00:20:45.689 "subsystem": "nbd", 00:20:45.689 "config": [] 00:20:45.689 } 00:20:45.689 ] 00:20:45.689 }' 00:20:45.689 17:13:01 -- target/tls.sh@208 -- # killprocess 570650 00:20:45.689 17:13:01 -- common/autotest_common.sh@926 -- # '[' -z 570650 ']' 00:20:45.689 17:13:01 -- common/autotest_common.sh@930 -- # kill -0 570650 00:20:45.689 17:13:01 -- common/autotest_common.sh@931 -- # uname 00:20:45.689 17:13:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:45.689 17:13:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 570650 00:20:45.689 17:13:01 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:45.689 17:13:01 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:45.689 17:13:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 570650' 00:20:45.689 killing process with pid 570650 00:20:45.689 17:13:01 -- common/autotest_common.sh@945 -- # kill 570650 00:20:45.689 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.689 00:20:45.689 Latency(us) 00:20:45.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.689 =================================================================================================================== 00:20:45.689 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.689 17:13:01 -- common/autotest_common.sh@950 -- # wait 570650 00:20:45.978 17:13:01 -- target/tls.sh@209 -- # killprocess 570301 00:20:45.978 17:13:01 -- common/autotest_common.sh@926 -- # '[' -z 570301 ']' 00:20:45.978 17:13:01 -- common/autotest_common.sh@930 -- # kill -0 570301 00:20:45.978 17:13:01 -- common/autotest_common.sh@931 -- # uname 00:20:45.978 17:13:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:45.978 17:13:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 570301 00:20:45.978 17:13:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:45.978 17:13:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:45.978 17:13:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 570301' 00:20:45.978 killing process with pid 570301 00:20:45.978 17:13:02 -- common/autotest_common.sh@945 -- # kill 570301 00:20:45.978 17:13:02 -- common/autotest_common.sh@950 -- # wait 570301 00:20:46.237 17:13:02 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:46.237 17:13:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:46.237 17:13:02 -- target/tls.sh@212 -- # echo '{ 00:20:46.237 "subsystems": [ 00:20:46.237 { 00:20:46.237 "subsystem": "iobuf", 00:20:46.237 "config": [ 00:20:46.237 { 00:20:46.237 "method": "iobuf_set_options", 00:20:46.237 "params": { 00:20:46.237 "small_pool_count": 8192, 00:20:46.237 "large_pool_count": 1024, 00:20:46.237 "small_bufsize": 8192, 00:20:46.237 "large_bufsize": 135168 00:20:46.237 } 00:20:46.237 } 00:20:46.237 ] 00:20:46.237 }, 00:20:46.237 { 00:20:46.237 "subsystem": "sock", 00:20:46.237 "config": [ 00:20:46.237 { 00:20:46.237 "method": "sock_impl_set_options", 00:20:46.237 "params": { 00:20:46.237 "impl_name": "posix", 00:20:46.237 "recv_buf_size": 2097152, 00:20:46.237 "send_buf_size": 2097152, 00:20:46.237 "enable_recv_pipe": true, 00:20:46.237 "enable_quickack": false, 00:20:46.237 
"enable_placement_id": 0, 00:20:46.237 "enable_zerocopy_send_server": true, 00:20:46.237 "enable_zerocopy_send_client": false, 00:20:46.237 "zerocopy_threshold": 0, 00:20:46.237 "tls_version": 0, 00:20:46.237 "enable_ktls": false 00:20:46.237 } 00:20:46.237 }, 00:20:46.237 { 00:20:46.237 "method": "sock_impl_set_options", 00:20:46.237 "params": { 00:20:46.237 "impl_name": "ssl", 00:20:46.237 "recv_buf_size": 4096, 00:20:46.237 "send_buf_size": 4096, 00:20:46.238 "enable_recv_pipe": true, 00:20:46.238 "enable_quickack": false, 00:20:46.238 "enable_placement_id": 0, 00:20:46.238 "enable_zerocopy_send_server": true, 00:20:46.238 "enable_zerocopy_send_client": false, 00:20:46.238 "zerocopy_threshold": 0, 00:20:46.238 "tls_version": 0, 00:20:46.238 "enable_ktls": false 00:20:46.238 } 00:20:46.238 } 00:20:46.238 ] 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "subsystem": "vmd", 00:20:46.238 "config": [] 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "subsystem": "accel", 00:20:46.238 "config": [ 00:20:46.238 { 00:20:46.238 "method": "accel_set_options", 00:20:46.238 "params": { 00:20:46.238 "small_cache_size": 128, 00:20:46.238 "large_cache_size": 16, 00:20:46.238 "task_count": 2048, 00:20:46.238 "sequence_count": 2048, 00:20:46.238 "buf_count": 2048 00:20:46.238 } 00:20:46.238 } 00:20:46.238 ] 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "subsystem": "bdev", 00:20:46.238 "config": [ 00:20:46.238 { 00:20:46.238 "method": "bdev_set_options", 00:20:46.238 "params": { 00:20:46.238 "bdev_io_pool_size": 65535, 00:20:46.238 "bdev_io_cache_size": 256, 00:20:46.238 "bdev_auto_examine": true, 00:20:46.238 "iobuf_small_cache_size": 128, 00:20:46.238 "iobuf_large_cache_size": 16 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "bdev_raid_set_options", 00:20:46.238 "params": { 00:20:46.238 "process_window_size_kb": 1024 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "bdev_iscsi_set_options", 00:20:46.238 "params": { 00:20:46.238 "timeout_sec": 30 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "bdev_nvme_set_options", 00:20:46.238 "params": { 00:20:46.238 "action_on_timeout": "none", 00:20:46.238 "timeout_us": 0, 00:20:46.238 "timeout_admin_us": 0, 00:20:46.238 "keep_alive_timeout_ms": 10000, 00:20:46.238 "transport_retry_count": 4, 00:20:46.238 "arbitration_burst": 0, 00:20:46.238 "low_priority_weight": 0, 00:20:46.238 "medium_priority_weight": 0, 00:20:46.238 "high_priority_weight": 0, 00:20:46.238 "nvme_adminq_poll_period_us": 10000, 00:20:46.238 "nvme_ioq_poll_period_us": 0, 00:20:46.238 "io_queue_requests": 0, 00:20:46.238 "delay_cmd_submit": true, 00:20:46.238 "bdev_retry_count": 3, 00:20:46.238 "transport_ack_timeout": 0, 00:20:46.238 "ctrlr_loss_timeout_sec": 0, 00:20:46.238 "reconnect_delay_sec": 0, 00:20:46.238 "fast_io_fail_timeout_sec": 0, 00:20:46.238 "generate_uuids": false, 00:20:46.238 "transport_tos": 0, 00:20:46.238 "io_path_stat": false, 00:20:46.238 "allow_accel_sequence": false 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "bdev_nvme_set_hotplug", 00:20:46.238 "params": { 00:20:46.238 "period_us": 100000, 00:20:46.238 "enable": false 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "bdev_malloc_create", 00:20:46.238 "params": { 00:20:46.238 "name": "malloc0", 00:20:46.238 "num_blocks": 8192, 00:20:46.238 "block_size": 4096, 00:20:46.238 "physical_block_size": 4096, 00:20:46.238 "uuid": "87878cc7-bdc4-4793-8141-46f1093d9a4f", 00:20:46.238 "optimal_io_boundary": 0 00:20:46.238 } 
00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "bdev_wait_for_examine" 00:20:46.238 } 00:20:46.238 ] 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "subsystem": "nbd", 00:20:46.238 "config": [] 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "subsystem": "scheduler", 00:20:46.238 "config": [ 00:20:46.238 { 00:20:46.238 "method": "framework_set_scheduler", 00:20:46.238 "params": { 00:20:46.238 "name": "static" 00:20:46.238 } 00:20:46.238 } 00:20:46.238 ] 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "subsystem": "nvmf", 00:20:46.238 "config": [ 00:20:46.238 { 00:20:46.238 "method": "nvmf_set_config", 00:20:46.238 "params": { 00:20:46.238 "discovery_filter": "match_any", 00:20:46.238 "admin_cmd_passthru": { 00:20:46.238 "identify_ctrlr": false 00:20:46.238 } 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "nvmf_set_max_subsystems", 00:20:46.238 "params": { 00:20:46.238 "max_subsystems": 1024 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "nvmf_set_crdt", 00:20:46.238 "params": { 00:20:46.238 "crdt1": 0, 00:20:46.238 "crdt2": 0, 00:20:46.238 "crdt3": 0 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "nvmf_create_transport", 00:20:46.238 "params": { 00:20:46.238 "trtype": "TCP", 00:20:46.238 "max_queue_depth": 128, 00:20:46.238 "max_io_qpairs_per_ctrlr": 127, 00:20:46.238 "in_capsule_data_size": 4096, 00:20:46.238 "max_io_size": 131072, 00:20:46.238 "io_unit_size": 131072, 00:20:46.238 "max_aq_depth": 128, 00:20:46.238 "num_shared_buffers": 511, 00:20:46.238 "buf_cache_size": 4294967295, 00:20:46.238 "dif_insert_or_strip": false, 00:20:46.238 "zcopy": false, 00:20:46.238 "c2h_success": false, 00:20:46.238 "sock_priority": 0, 00:20:46.238 "abort_timeout_sec": 1 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "nvmf_create_subsystem", 00:20:46.238 "params": { 00:20:46.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.238 "allow_any_host": false, 00:20:46.238 "serial_number": "SPDK00000000000001", 00:20:46.238 "model_number": "SPDK bdev Controller", 00:20:46.238 "max_namespaces": 10, 00:20:46.238 "min_cntlid": 1, 00:20:46.238 "max_cntlid": 65519, 00:20:46.238 "ana_reporting": false 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "nvmf_subsystem_add_host", 00:20:46.238 "params": { 00:20:46.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.238 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.238 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "nvmf_subsystem_add_ns", 00:20:46.238 "params": { 00:20:46.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.238 "namespace": { 00:20:46.238 "nsid": 1, 00:20:46.238 "bdev_name": "malloc0", 00:20:46.238 "nguid": "87878CC7BDC44793814146F1093D9A4F", 00:20:46.238 "uuid": "87878cc7-bdc4-4793-8141-46f1093d9a4f" 00:20:46.238 } 00:20:46.238 } 00:20:46.238 }, 00:20:46.238 { 00:20:46.238 "method": "nvmf_subsystem_add_listener", 00:20:46.238 "params": { 00:20:46.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.238 "listen_address": { 00:20:46.238 "trtype": "TCP", 00:20:46.238 "adrfam": "IPv4", 00:20:46.238 "traddr": "10.0.0.2", 00:20:46.238 "trsvcid": "4420" 00:20:46.238 }, 00:20:46.238 "secure_channel": true 00:20:46.238 } 00:20:46.238 } 00:20:46.238 ] 00:20:46.238 } 00:20:46.238 ] 00:20:46.238 }' 00:20:46.238 17:13:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:46.238 17:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:46.238 17:13:02 -- 
nvmf/common.sh@469 -- # nvmfpid=571019 00:20:46.238 17:13:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:46.238 17:13:02 -- nvmf/common.sh@470 -- # waitforlisten 571019 00:20:46.238 17:13:02 -- common/autotest_common.sh@819 -- # '[' -z 571019 ']' 00:20:46.238 17:13:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.238 17:13:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:46.238 17:13:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.238 17:13:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:46.238 17:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:46.238 [2024-07-20 17:13:02.312525] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:46.238 [2024-07-20 17:13:02.312609] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.238 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.238 [2024-07-20 17:13:02.384267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.497 [2024-07-20 17:13:02.477472] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:46.497 [2024-07-20 17:13:02.477636] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.497 [2024-07-20 17:13:02.477658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.497 [2024-07-20 17:13:02.477673] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
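The two large JSON blobs above are save_config dumps of the live target (tgtconf) and of bdevperf (bdevperfconf); tls.sh feeds them back in through -c /dev/fd/62 and /dev/fd/63 so the whole TLS setup, PSK path included, is shown to survive a configuration round-trip. The same round-trip with ordinary files, as a sketch:

scripts/rpc.py save_config > tgt.json                 # dump the live config
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -m 0x2 -c tgt.json
# the fresh target recreates the transport, subsystem, TLS listener,
# namespace and host entry purely from the JSON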
00:20:46.497 [2024-07-20 17:13:02.477704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.756 [2024-07-20 17:13:02.697530] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.756 [2024-07-20 17:13:02.729555] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.756 [2024-07-20 17:13:02.729818] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.321 17:13:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:47.321 17:13:03 -- common/autotest_common.sh@852 -- # return 0 00:20:47.321 17:13:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:47.321 17:13:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:47.321 17:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:47.321 17:13:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.321 17:13:03 -- target/tls.sh@216 -- # bdevperf_pid=571171 00:20:47.321 17:13:03 -- target/tls.sh@217 -- # waitforlisten 571171 /var/tmp/bdevperf.sock 00:20:47.321 17:13:03 -- common/autotest_common.sh@819 -- # '[' -z 571171 ']' 00:20:47.321 17:13:03 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:47.321 17:13:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.321 17:13:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:47.321 17:13:03 -- target/tls.sh@213 -- # echo '{ 00:20:47.321 "subsystems": [ 00:20:47.321 { 00:20:47.321 "subsystem": "iobuf", 00:20:47.321 "config": [ 00:20:47.321 { 00:20:47.321 "method": "iobuf_set_options", 00:20:47.321 "params": { 00:20:47.321 "small_pool_count": 8192, 00:20:47.321 "large_pool_count": 1024, 00:20:47.321 "small_bufsize": 8192, 00:20:47.321 "large_bufsize": 135168 00:20:47.321 } 00:20:47.321 } 00:20:47.321 ] 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "subsystem": "sock", 00:20:47.321 "config": [ 00:20:47.321 { 00:20:47.321 "method": "sock_impl_set_options", 00:20:47.321 "params": { 00:20:47.321 "impl_name": "posix", 00:20:47.321 "recv_buf_size": 2097152, 00:20:47.321 "send_buf_size": 2097152, 00:20:47.321 "enable_recv_pipe": true, 00:20:47.321 "enable_quickack": false, 00:20:47.321 "enable_placement_id": 0, 00:20:47.321 "enable_zerocopy_send_server": true, 00:20:47.321 "enable_zerocopy_send_client": false, 00:20:47.321 "zerocopy_threshold": 0, 00:20:47.321 "tls_version": 0, 00:20:47.321 "enable_ktls": false 00:20:47.321 } 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "method": "sock_impl_set_options", 00:20:47.321 "params": { 00:20:47.321 "impl_name": "ssl", 00:20:47.321 "recv_buf_size": 4096, 00:20:47.321 "send_buf_size": 4096, 00:20:47.321 "enable_recv_pipe": true, 00:20:47.321 "enable_quickack": false, 00:20:47.321 "enable_placement_id": 0, 00:20:47.321 "enable_zerocopy_send_server": true, 00:20:47.321 "enable_zerocopy_send_client": false, 00:20:47.321 "zerocopy_threshold": 0, 00:20:47.321 "tls_version": 0, 00:20:47.321 "enable_ktls": false 00:20:47.321 } 00:20:47.321 } 00:20:47.321 ] 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "subsystem": "vmd", 00:20:47.321 "config": [] 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "subsystem": "accel", 00:20:47.321 "config": [ 00:20:47.321 { 00:20:47.321 "method": "accel_set_options", 00:20:47.321 "params": { 00:20:47.321 "small_cache_size": 128, 00:20:47.321 "large_cache_size": 
16, 00:20:47.321 "task_count": 2048, 00:20:47.321 "sequence_count": 2048, 00:20:47.321 "buf_count": 2048 00:20:47.321 } 00:20:47.321 } 00:20:47.321 ] 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "subsystem": "bdev", 00:20:47.321 "config": [ 00:20:47.321 { 00:20:47.321 "method": "bdev_set_options", 00:20:47.321 "params": { 00:20:47.321 "bdev_io_pool_size": 65535, 00:20:47.321 "bdev_io_cache_size": 256, 00:20:47.321 "bdev_auto_examine": true, 00:20:47.321 "iobuf_small_cache_size": 128, 00:20:47.321 "iobuf_large_cache_size": 16 00:20:47.321 } 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "method": "bdev_raid_set_options", 00:20:47.321 "params": { 00:20:47.321 "process_window_size_kb": 1024 00:20:47.321 } 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "method": "bdev_iscsi_set_options", 00:20:47.321 "params": { 00:20:47.321 "timeout_sec": 30 00:20:47.321 } 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "method": "bdev_nvme_set_options", 00:20:47.321 "params": { 00:20:47.321 "action_on_timeout": "none", 00:20:47.321 "timeout_us": 0, 00:20:47.321 "timeout_admin_us": 0, 00:20:47.321 "keep_alive_timeout_ms": 10000, 00:20:47.321 "transport_retry_count": 4, 00:20:47.321 "arbitration_burst": 0, 00:20:47.321 "low_priority_weight": 0, 00:20:47.321 "medium_priority_weight": 0, 00:20:47.321 "high_priority_weight": 0, 00:20:47.321 "nvme_adminq_poll_period_us": 10000, 00:20:47.321 "nvme_ioq_poll_period_us": 0, 00:20:47.321 "io_queue_requests": 512, 00:20:47.321 "delay_cmd_submit": true, 00:20:47.321 "bdev_retry_count": 3, 00:20:47.321 "transport_ack_timeout": 0, 00:20:47.321 "ctrlr_loss_timeout_sec": 0, 00:20:47.321 "reconnect_delay_sec": 0, 00:20:47.321 "fast_io_fail_timeout_sec": 0, 00:20:47.321 "generate_uuids": false, 00:20:47.321 "transport_tos": 0, 00:20:47.321 "io_path_stat": false, 00:20:47.321 "allow_accel_sequence": false 00:20:47.321 } 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "method": "bdev_nvme_attach_controller", 00:20:47.321 "params": { 00:20:47.321 "name": "TLSTEST", 00:20:47.321 "trtype": "TCP", 00:20:47.321 "adrfam": "IPv4", 00:20:47.321 "traddr": "10.0.0.2", 00:20:47.321 "trsvcid": "4420", 00:20:47.321 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.321 "prchk_reftag": false, 00:20:47.321 "prchk_guard": false, 00:20:47.321 "ctrlr_loss_timeout_sec": 0, 00:20:47.321 "reconnect_delay_sec": 0, 00:20:47.321 "fast_io_fail_timeout_sec": 0, 00:20:47.321 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:47.321 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.321 "hdgst": false, 00:20:47.321 "ddgst": false 00:20:47.321 } 00:20:47.321 }, 00:20:47.321 { 00:20:47.321 "method": "bdev_nvme_set_hotplug", 00:20:47.321 "params": { 00:20:47.321 "period_us": 100000, 00:20:47.321 "enable": false 00:20:47.322 } 00:20:47.322 }, 00:20:47.322 { 00:20:47.322 "method": "bdev_wait_for_examine" 00:20:47.322 } 00:20:47.322 ] 00:20:47.322 }, 00:20:47.322 { 00:20:47.322 "subsystem": "nbd", 00:20:47.322 "config": [] 00:20:47.322 } 00:20:47.322 ] 00:20:47.322 }' 00:20:47.322 17:13:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.322 17:13:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:47.322 17:13:03 -- common/autotest_common.sh@10 -- # set +x 00:20:47.322 [2024-07-20 17:13:03.340207] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
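Because bdevperf is started with -z it comes up idle and waits to be driven over its RPC socket; the bdevperf.py helper invoked just below issues perform_tests (with its own 20 s timeout) to kick off the 10-second verify workload. The pairing, condensed from the traces (bdevperf.json standing in for the /dev/fd/63 config above):

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c bdevperf.json &
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests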
00:20:47.322 [2024-07-20 17:13:03.340294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid571171 ] 00:20:47.322 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.322 [2024-07-20 17:13:03.400786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.579 [2024-07-20 17:13:03.487887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.579 [2024-07-20 17:13:03.645403] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.143 17:13:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:48.143 17:13:04 -- common/autotest_common.sh@852 -- # return 0 00:20:48.143 17:13:04 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:48.406 Running I/O for 10 seconds... 00:20:58.375 00:20:58.375 Latency(us) 00:20:58.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.375 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:58.375 Verification LBA range: start 0x0 length 0x2000 00:20:58.375 TLSTESTn1 : 10.06 973.26 3.80 0.00 0.00 131208.60 8107.05 174762.67 00:20:58.375 =================================================================================================================== 00:20:58.375 Total : 973.26 3.80 0.00 0.00 131208.60 8107.05 174762.67 00:20:58.375 0 00:20:58.375 17:13:14 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:58.375 17:13:14 -- target/tls.sh@223 -- # killprocess 571171 00:20:58.375 17:13:14 -- common/autotest_common.sh@926 -- # '[' -z 571171 ']' 00:20:58.375 17:13:14 -- common/autotest_common.sh@930 -- # kill -0 571171 00:20:58.375 17:13:14 -- common/autotest_common.sh@931 -- # uname 00:20:58.375 17:13:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:58.375 17:13:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 571171 00:20:58.375 17:13:14 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:58.375 17:13:14 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:58.375 17:13:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 571171' 00:20:58.375 killing process with pid 571171 00:20:58.375 17:13:14 -- common/autotest_common.sh@945 -- # kill 571171 00:20:58.375 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.375 00:20:58.375 Latency(us) 00:20:58.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.375 =================================================================================================================== 00:20:58.375 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.375 17:13:14 -- common/autotest_common.sh@950 -- # wait 571171 00:20:58.633 17:13:14 -- target/tls.sh@224 -- # killprocess 571019 00:20:58.633 17:13:14 -- common/autotest_common.sh@926 -- # '[' -z 571019 ']' 00:20:58.633 17:13:14 -- common/autotest_common.sh@930 -- # kill -0 571019 00:20:58.633 17:13:14 -- common/autotest_common.sh@931 -- # uname 00:20:58.633 17:13:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:58.633 17:13:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 571019 00:20:58.633 17:13:14 -- common/autotest_common.sh@932 -- # 
process_name=reactor_1 00:20:58.633 17:13:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:58.633 17:13:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 571019' 00:20:58.633 killing process with pid 571019 00:20:58.633 17:13:14 -- common/autotest_common.sh@945 -- # kill 571019 00:20:58.633 17:13:14 -- common/autotest_common.sh@950 -- # wait 571019 00:20:58.890 17:13:14 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:58.890 17:13:14 -- target/tls.sh@227 -- # cleanup 00:20:58.890 17:13:14 -- target/tls.sh@15 -- # process_shm --id 0 00:20:58.890 17:13:14 -- common/autotest_common.sh@796 -- # type=--id 00:20:58.890 17:13:14 -- common/autotest_common.sh@797 -- # id=0 00:20:58.890 17:13:14 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:58.890 17:13:14 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:58.890 17:13:14 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:58.890 17:13:14 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:58.890 17:13:14 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:58.890 17:13:14 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:58.890 nvmf_trace.0 00:20:58.890 17:13:15 -- common/autotest_common.sh@811 -- # return 0 00:20:58.890 17:13:15 -- target/tls.sh@16 -- # killprocess 571171 00:20:58.890 17:13:15 -- common/autotest_common.sh@926 -- # '[' -z 571171 ']' 00:20:58.890 17:13:15 -- common/autotest_common.sh@930 -- # kill -0 571171 00:20:58.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (571171) - No such process 00:20:58.890 17:13:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 571171 is not found' 00:20:58.891 Process with pid 571171 is not found 00:20:58.891 17:13:15 -- target/tls.sh@17 -- # nvmftestfini 00:20:58.891 17:13:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:58.891 17:13:15 -- nvmf/common.sh@116 -- # sync 00:20:58.891 17:13:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:58.891 17:13:15 -- nvmf/common.sh@119 -- # set +e 00:20:58.891 17:13:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:58.891 17:13:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:58.891 rmmod nvme_tcp 00:20:59.148 rmmod nvme_fabrics 00:20:59.148 rmmod nvme_keyring 00:20:59.148 17:13:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:59.148 17:13:15 -- nvmf/common.sh@123 -- # set -e 00:20:59.148 17:13:15 -- nvmf/common.sh@124 -- # return 0 00:20:59.148 17:13:15 -- nvmf/common.sh@477 -- # '[' -n 571019 ']' 00:20:59.148 17:13:15 -- nvmf/common.sh@478 -- # killprocess 571019 00:20:59.148 17:13:15 -- common/autotest_common.sh@926 -- # '[' -z 571019 ']' 00:20:59.148 17:13:15 -- common/autotest_common.sh@930 -- # kill -0 571019 00:20:59.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (571019) - No such process 00:20:59.148 17:13:15 -- common/autotest_common.sh@953 -- # echo 'Process with pid 571019 is not found' 00:20:59.148 Process with pid 571019 is not found 00:20:59.148 17:13:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:59.148 17:13:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:59.148 17:13:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:59.148 17:13:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.148 
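The "No such process" lines above are expected rather than failures: by the time cleanup runs, the timed workload has already ended both processes, and killprocess treats an absent PID as already cleaned up. Teardown then archives the trace shm file and unloads the initiator modules; a condensed sketch of the tolerant pattern (the pid variables are hypothetical placeholders):

for pid in "$bdevperf_pid" "$nvmf_pid"; do
    kill "$pid" 2>/dev/null || echo "Process with pid $pid is not found"
done
tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0
modprobe -v -r nvme-tcp nvme-fabrics    # initiator kernel modules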
17:13:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:59.148 17:13:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.148 17:13:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.148 17:13:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.043 17:13:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:01.043 17:13:17 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:01.043 00:21:01.043 real 1m13.536s 00:21:01.043 user 1m57.646s 00:21:01.043 sys 0m24.224s 00:21:01.043 17:13:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.043 17:13:17 -- common/autotest_common.sh@10 -- # set +x 00:21:01.043 ************************************ 00:21:01.043 END TEST nvmf_tls 00:21:01.043 ************************************ 00:21:01.043 17:13:17 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:01.043 17:13:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:01.043 17:13:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:01.043 17:13:17 -- common/autotest_common.sh@10 -- # set +x 00:21:01.043 ************************************ 00:21:01.043 START TEST nvmf_fips 00:21:01.043 ************************************ 00:21:01.043 17:13:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:01.043 * Looking for test storage... 00:21:01.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:01.302 17:13:17 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:01.302 17:13:17 -- nvmf/common.sh@7 -- # uname -s 00:21:01.302 17:13:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.302 17:13:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.302 17:13:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.302 17:13:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.302 17:13:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.302 17:13:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.302 17:13:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.302 17:13:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.302 17:13:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.302 17:13:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.302 17:13:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.302 17:13:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.302 17:13:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.302 17:13:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.302 17:13:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:01.302 17:13:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:01.302 17:13:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.302 17:13:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.302 17:13:17 -- scripts/common.sh@442 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.302 17:13:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.302 17:13:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.302 17:13:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.302 17:13:17 -- paths/export.sh@5 -- # export PATH 00:21:01.302 17:13:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.302 17:13:17 -- nvmf/common.sh@46 -- # : 0 00:21:01.302 17:13:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:01.302 17:13:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:01.302 17:13:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:01.302 17:13:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.302 17:13:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.302 17:13:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:01.302 17:13:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:01.302 17:13:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:01.302 17:13:17 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:01.302 17:13:17 -- fips/fips.sh@89 -- # check_openssl_version 00:21:01.302 17:13:17 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:01.302 17:13:17 -- fips/fips.sh@85 -- # openssl version 00:21:01.302 17:13:17 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:01.302 17:13:17 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:01.302 17:13:17 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:01.302 
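fips.sh gates the whole test on OpenSSL being at least 3.0.0: it takes the second field of openssl version (3.0.9 on this machine), and the cmp_versions trace that follows compares the dotted components left to right, returning success at the first component where the installed version wins (here 9 > 0 in the last position). An equivalent check, offered only as an alternative sketch and assuming GNU sort for -V:

ver=$(openssl version | awk '{print $2}')   # "3.0.9" on this machine
if [ "$(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1)" = "3.0.0" ]; then
    echo "OpenSSL $ver >= 3.0.0, FIPS checks can proceed"
fi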
17:13:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:01.302 17:13:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:01.302 17:13:17 -- scripts/common.sh@335 -- # IFS=.-: 00:21:01.302 17:13:17 -- scripts/common.sh@335 -- # read -ra ver1 00:21:01.302 17:13:17 -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.302 17:13:17 -- scripts/common.sh@336 -- # read -ra ver2 00:21:01.302 17:13:17 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:01.302 17:13:17 -- scripts/common.sh@339 -- # ver1_l=3 00:21:01.302 17:13:17 -- scripts/common.sh@340 -- # ver2_l=3 00:21:01.302 17:13:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:01.302 17:13:17 -- scripts/common.sh@343 -- # case "$op" in 00:21:01.302 17:13:17 -- scripts/common.sh@347 -- # : 1 00:21:01.302 17:13:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:01.302 17:13:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:01.303 17:13:17 -- scripts/common.sh@364 -- # decimal 3 00:21:01.303 17:13:17 -- scripts/common.sh@352 -- # local d=3 00:21:01.303 17:13:17 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:01.303 17:13:17 -- scripts/common.sh@354 -- # echo 3 00:21:01.303 17:13:17 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:01.303 17:13:17 -- scripts/common.sh@365 -- # decimal 3 00:21:01.303 17:13:17 -- scripts/common.sh@352 -- # local d=3 00:21:01.303 17:13:17 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:01.303 17:13:17 -- scripts/common.sh@354 -- # echo 3 00:21:01.303 17:13:17 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:01.303 17:13:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:01.303 17:13:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:01.303 17:13:17 -- scripts/common.sh@363 -- # (( v++ )) 00:21:01.303 17:13:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:01.303 17:13:17 -- scripts/common.sh@364 -- # decimal 0 00:21:01.303 17:13:17 -- scripts/common.sh@352 -- # local d=0 00:21:01.303 17:13:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:01.303 17:13:17 -- scripts/common.sh@354 -- # echo 0 00:21:01.303 17:13:17 -- scripts/common.sh@364 -- # ver1[v]=0 00:21:01.303 17:13:17 -- scripts/common.sh@365 -- # decimal 0 00:21:01.303 17:13:17 -- scripts/common.sh@352 -- # local d=0 00:21:01.303 17:13:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:01.303 17:13:17 -- scripts/common.sh@354 -- # echo 0 00:21:01.303 17:13:17 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:01.303 17:13:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:01.303 17:13:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:01.303 17:13:17 -- scripts/common.sh@363 -- # (( v++ )) 00:21:01.303 17:13:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.303 17:13:17 -- scripts/common.sh@364 -- # decimal 9 00:21:01.303 17:13:17 -- scripts/common.sh@352 -- # local d=9 00:21:01.303 17:13:17 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:01.303 17:13:17 -- scripts/common.sh@354 -- # echo 9 00:21:01.303 17:13:17 -- scripts/common.sh@364 -- # ver1[v]=9 00:21:01.303 17:13:17 -- scripts/common.sh@365 -- # decimal 0 00:21:01.303 17:13:17 -- scripts/common.sh@352 -- # local d=0 00:21:01.303 17:13:17 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:01.303 17:13:17 -- scripts/common.sh@354 -- # echo 0 00:21:01.303 17:13:17 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:01.303 17:13:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:01.303 17:13:17 -- scripts/common.sh@366 -- # return 0 00:21:01.303 17:13:17 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:01.303 17:13:17 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:01.303 17:13:17 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:01.303 17:13:17 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:01.303 17:13:17 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:01.303 17:13:17 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:01.303 17:13:17 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:01.303 17:13:17 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:01.303 17:13:17 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:01.303 17:13:17 -- fips/fips.sh@114 -- # build_openssl_config 00:21:01.303 17:13:17 -- fips/fips.sh@37 -- # cat 00:21:01.303 17:13:17 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:01.303 17:13:17 -- fips/fips.sh@58 -- # cat - 00:21:01.303 17:13:17 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:01.303 17:13:17 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:01.303 17:13:17 -- fips/fips.sh@117 -- # mapfile -t providers 00:21:01.303 17:13:17 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:21:01.303 17:13:17 -- fips/fips.sh@117 -- # openssl list -providers 00:21:01.303 17:13:17 -- fips/fips.sh@117 -- # grep name 00:21:01.303 17:13:17 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:01.303 17:13:17 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:01.303 17:13:17 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:01.303 17:13:17 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:01.303 17:13:17 -- fips/fips.sh@128 -- # : 00:21:01.303 17:13:17 -- common/autotest_common.sh@640 -- # local es=0 00:21:01.303 17:13:17 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:01.303 17:13:17 -- common/autotest_common.sh@628 -- # local arg=openssl 00:21:01.303 17:13:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.303 17:13:17 -- common/autotest_common.sh@632 -- # type -t openssl 00:21:01.303 17:13:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.303 17:13:17 -- common/autotest_common.sh@634 -- # type -P openssl 00:21:01.303 17:13:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:01.303 17:13:17 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:21:01.303 17:13:17 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:21:01.303 17:13:17 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:21:01.303 Error setting digest 00:21:01.303 0052F3465F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:01.303 0052F3465F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:01.303 17:13:17 -- common/autotest_common.sh@643 -- # es=1 00:21:01.303 17:13:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:01.303 17:13:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:01.303 17:13:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:01.303 17:13:17 -- fips/fips.sh@131 -- # nvmftestinit 00:21:01.303 17:13:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:01.303 17:13:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.303 17:13:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:01.303 17:13:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:01.303 17:13:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:01.303 17:13:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.303 17:13:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.303 17:13:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.303 17:13:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:01.303 17:13:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:01.303 17:13:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:01.303 17:13:17 -- common/autotest_common.sh@10 -- # set +x 00:21:03.201 17:13:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:03.201 17:13:19 -- nvmf/common.sh@290 -- # pci_devs=() 
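At this point fips.sh has established three things: the provider module file exists (openssl info -modulesdir), both the base and fips providers are active (openssl list -providers), and a non-approved digest really is rejected; the openssl md5 failure above is the expected outcome, which is why the NOT wrapper swallows es=1. A hedged standalone recheck of the same three conditions, assuming OPENSSL_CONF already points at a FIPS-enforcing config as it does here with spdk_fips.conf:

  # Sanity-check FIPS enforcement (sketch; exits non-zero on the first gap).
  moddir=$(openssl info -modulesdir)
  [ -f "$moddir/fips.so" ] || { echo "no fips.so under $moddir"; exit 1; }
  openssl list -providers | grep -qi 'fips' || { echo "fips provider not loaded"; exit 1; }
  if echo hello | openssl md5 >/dev/null 2>&1; then
      echo "MD5 succeeded, so FIPS restrictions are not being enforced"
      exit 1
  fi
  echo "FIPS enforcement confirmed"

The array declarations resuming below are nvmftestinit starting prepare_net_devs' scan for usable NICs.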
00:21:03.201 17:13:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:03.201 17:13:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:03.201 17:13:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:03.201 17:13:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:03.201 17:13:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:03.201 17:13:19 -- nvmf/common.sh@294 -- # net_devs=() 00:21:03.201 17:13:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:03.201 17:13:19 -- nvmf/common.sh@295 -- # e810=() 00:21:03.201 17:13:19 -- nvmf/common.sh@295 -- # local -ga e810 00:21:03.201 17:13:19 -- nvmf/common.sh@296 -- # x722=() 00:21:03.201 17:13:19 -- nvmf/common.sh@296 -- # local -ga x722 00:21:03.201 17:13:19 -- nvmf/common.sh@297 -- # mlx=() 00:21:03.201 17:13:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:03.201 17:13:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.201 17:13:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:03.201 17:13:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:03.201 17:13:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:03.201 17:13:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:03.201 17:13:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:03.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:03.201 17:13:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:03.201 17:13:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:03.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:03.201 17:13:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:03.201 17:13:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 
00:21:03.201 17:13:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:03.201 17:13:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.201 17:13:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:03.201 17:13:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.201 17:13:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:03.201 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:03.201 17:13:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.201 17:13:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:03.201 17:13:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.201 17:13:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:03.201 17:13:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.201 17:13:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:03.201 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:03.201 17:13:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.201 17:13:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:03.201 17:13:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:03.201 17:13:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:03.201 17:13:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.201 17:13:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.201 17:13:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.201 17:13:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:03.201 17:13:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.201 17:13:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.201 17:13:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:03.201 17:13:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.201 17:13:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.201 17:13:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:03.201 17:13:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:03.201 17:13:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.201 17:13:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.201 17:13:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.201 17:13:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.201 17:13:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:03.201 17:13:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.201 17:13:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.201 17:13:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.201 17:13:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:03.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:03.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:21:03.201 00:21:03.201 --- 10.0.0.2 ping statistics --- 00:21:03.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.201 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:21:03.201 17:13:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:03.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:21:03.201 00:21:03.201 --- 10.0.0.1 ping statistics --- 00:21:03.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.201 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:03.201 17:13:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.201 17:13:19 -- nvmf/common.sh@410 -- # return 0 00:21:03.201 17:13:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:03.201 17:13:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.201 17:13:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:03.201 17:13:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.201 17:13:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:03.201 17:13:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:03.201 17:13:19 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:03.201 17:13:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:03.201 17:13:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:03.201 17:13:19 -- common/autotest_common.sh@10 -- # set +x 00:21:03.201 17:13:19 -- nvmf/common.sh@469 -- # nvmfpid=574504 00:21:03.201 17:13:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:03.201 17:13:19 -- nvmf/common.sh@470 -- # waitforlisten 574504 00:21:03.201 17:13:19 -- common/autotest_common.sh@819 -- # '[' -z 574504 ']' 00:21:03.201 17:13:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.201 17:13:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:03.201 17:13:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.201 17:13:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:03.201 17:13:19 -- common/autotest_common.sh@10 -- # set +x 00:21:03.459 [2024-07-20 17:13:19.374182] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:03.459 [2024-07-20 17:13:19.374278] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.459 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.459 [2024-07-20 17:13:19.440298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.459 [2024-07-20 17:13:19.526851] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:03.459 [2024-07-20 17:13:19.526985] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.459 [2024-07-20 17:13:19.527000] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
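Both pings succeeding is the checkpoint for nvmf_tcp_init: the target port (cvl_0_0, 10.0.0.2) now lives in its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so NVMe/TCP traffic crosses between the two physical E810 ports rather than the loopback device. A hedged reproduction of the same isolation pattern, using a veth pair where this run moves a physical port:

  # Give a "target" its own network stack on one host (sketch).
  ip netns add nvmf_tgt_ns
  ip link add veth_init type veth peer name veth_tgt
  ip link set veth_tgt netns nvmf_tgt_ns
  ip addr add 10.0.0.1/24 dev veth_init
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_init up
  ip netns exec nvmf_tgt_ns ip link set veth_tgt up
  ip netns exec nvmf_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2    # initiator -> target, as in the trace above

The notices resuming below are the nvmf_tgt application finishing startup inside that namespace.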
00:21:03.459 [2024-07-20 17:13:19.527012] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:03.459 [2024-07-20 17:13:19.527037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.391 17:13:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:04.392 17:13:20 -- common/autotest_common.sh@852 -- # return 0 00:21:04.392 17:13:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:04.392 17:13:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:04.392 17:13:20 -- common/autotest_common.sh@10 -- # set +x 00:21:04.392 17:13:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.392 17:13:20 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:04.392 17:13:20 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:04.392 17:13:20 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:04.392 17:13:20 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:04.392 17:13:20 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:04.392 17:13:20 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:04.392 17:13:20 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:04.392 17:13:20 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:04.649 [2024-07-20 17:13:20.610974] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.649 [2024-07-20 17:13:20.626958] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.649 [2024-07-20 17:13:20.627212] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.649 malloc0 00:21:04.649 17:13:20 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.649 17:13:20 -- fips/fips.sh@148 -- # bdevperf_pid=574664 00:21:04.649 17:13:20 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.649 17:13:20 -- fips/fips.sh@149 -- # waitforlisten 574664 /var/tmp/bdevperf.sock 00:21:04.649 17:13:20 -- common/autotest_common.sh@819 -- # '[' -z 574664 ']' 00:21:04.649 17:13:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.649 17:13:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:04.649 17:13:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.649 17:13:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:04.649 17:13:20 -- common/autotest_common.sh@10 -- # set +x 00:21:04.649 [2024-07-20 17:13:20.746995] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
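With the transport listening on 10.0.0.2:4420 (tcp.c flags TLS as experimental above) and the test PSK written out with 0600 permissions, the step that follows attaches a TLS-secured controller from the bdevperf side. A condensed sketch of that attach, mirroring the rpc.py invocation recorded further down in this log (paths shortened; the PSK value is this run's test key):

  # Initiator-side TLS attach (sketch of this log's rpc.py call).
  key=/tmp/key.txt
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
  chmod 0600 "$key"    # keep the PSK private
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$key"

The EAL parameter dump below is bdevperf finishing its own startup before that attach runs.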
00:21:04.649 [2024-07-20 17:13:20.747098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid574664 ] 00:21:04.649 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.906 [2024-07-20 17:13:20.808348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.906 [2024-07-20 17:13:20.898385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.837 17:13:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:05.837 17:13:21 -- common/autotest_common.sh@852 -- # return 0 00:21:05.837 17:13:21 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.837 [2024-07-20 17:13:21.949880] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.094 TLSTESTn1 00:21:06.094 17:13:22 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:06.094 Running I/O for 10 seconds... 00:21:18.309 00:21:18.309 Latency(us) 00:21:18.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.309 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.309 Verification LBA range: start 0x0 length 0x2000 00:21:18.309 TLSTESTn1 : 10.06 987.58 3.86 0.00 0.00 129322.41 5024.43 167772.16 00:21:18.309 =================================================================================================================== 00:21:18.309 Total : 987.58 3.86 0.00 0.00 129322.41 5024.43 167772.16 00:21:18.309 0 00:21:18.309 17:13:32 -- fips/fips.sh@1 -- # cleanup 00:21:18.309 17:13:32 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:18.309 17:13:32 -- common/autotest_common.sh@796 -- # type=--id 00:21:18.309 17:13:32 -- common/autotest_common.sh@797 -- # id=0 00:21:18.309 17:13:32 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:18.309 17:13:32 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:18.309 17:13:32 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:18.309 17:13:32 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:18.309 17:13:32 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:18.309 17:13:32 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:18.309 nvmf_trace.0 00:21:18.309 17:13:32 -- common/autotest_common.sh@811 -- # return 0 00:21:18.309 17:13:32 -- fips/fips.sh@16 -- # killprocess 574664 00:21:18.309 17:13:32 -- common/autotest_common.sh@926 -- # '[' -z 574664 ']' 00:21:18.309 17:13:32 -- common/autotest_common.sh@930 -- # kill -0 574664 00:21:18.309 17:13:32 -- common/autotest_common.sh@931 -- # uname 00:21:18.309 17:13:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:18.309 17:13:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 574664 00:21:18.309 17:13:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:18.309 17:13:32 -- common/autotest_common.sh@936 -- # '[' 
reactor_2 = sudo ']' 00:21:18.309 17:13:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 574664' 00:21:18.309 killing process with pid 574664 00:21:18.309 17:13:32 -- common/autotest_common.sh@945 -- # kill 574664 00:21:18.309 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.309 00:21:18.309 Latency(us) 00:21:18.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.309 =================================================================================================================== 00:21:18.309 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.309 17:13:32 -- common/autotest_common.sh@950 -- # wait 574664 00:21:18.309 17:13:32 -- fips/fips.sh@17 -- # nvmftestfini 00:21:18.309 17:13:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:18.309 17:13:32 -- nvmf/common.sh@116 -- # sync 00:21:18.309 17:13:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:18.309 17:13:32 -- nvmf/common.sh@119 -- # set +e 00:21:18.309 17:13:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:18.309 17:13:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:18.309 rmmod nvme_tcp 00:21:18.309 rmmod nvme_fabrics 00:21:18.309 rmmod nvme_keyring 00:21:18.309 17:13:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:18.309 17:13:32 -- nvmf/common.sh@123 -- # set -e 00:21:18.309 17:13:32 -- nvmf/common.sh@124 -- # return 0 00:21:18.309 17:13:32 -- nvmf/common.sh@477 -- # '[' -n 574504 ']' 00:21:18.309 17:13:32 -- nvmf/common.sh@478 -- # killprocess 574504 00:21:18.309 17:13:32 -- common/autotest_common.sh@926 -- # '[' -z 574504 ']' 00:21:18.309 17:13:32 -- common/autotest_common.sh@930 -- # kill -0 574504 00:21:18.309 17:13:32 -- common/autotest_common.sh@931 -- # uname 00:21:18.309 17:13:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:18.309 17:13:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 574504 00:21:18.309 17:13:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:18.309 17:13:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:18.309 17:13:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 574504' 00:21:18.309 killing process with pid 574504 00:21:18.309 17:13:32 -- common/autotest_common.sh@945 -- # kill 574504 00:21:18.309 17:13:32 -- common/autotest_common.sh@950 -- # wait 574504 00:21:18.309 17:13:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:18.309 17:13:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:18.309 17:13:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:18.310 17:13:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.310 17:13:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:18.310 17:13:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.310 17:13:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.310 17:13:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.876 17:13:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:18.876 17:13:34 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:18.876 00:21:18.876 real 0m17.748s 00:21:18.876 user 0m21.812s 00:21:18.876 sys 0m6.661s 00:21:18.876 17:13:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.876 17:13:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.876 ************************************ 00:21:18.876 END TEST nvmf_fips 00:21:18.876 
************************************ 00:21:18.876 17:13:34 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:18.876 17:13:34 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:18.876 17:13:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:18.876 17:13:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:18.876 17:13:34 -- common/autotest_common.sh@10 -- # set +x 00:21:18.876 ************************************ 00:21:18.876 START TEST nvmf_fuzz 00:21:18.876 ************************************ 00:21:18.876 17:13:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:18.876 * Looking for test storage... 00:21:18.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:18.876 17:13:34 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.876 17:13:34 -- nvmf/common.sh@7 -- # uname -s 00:21:18.876 17:13:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.876 17:13:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.876 17:13:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.876 17:13:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.876 17:13:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.876 17:13:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.877 17:13:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.877 17:13:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.877 17:13:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.877 17:13:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.877 17:13:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.877 17:13:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.877 17:13:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.877 17:13:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.877 17:13:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:18.877 17:13:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:18.877 17:13:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.877 17:13:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.877 17:13:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.877 17:13:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.877 17:13:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.877 17:13:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.877 17:13:34 -- paths/export.sh@5 -- # export PATH 00:21:18.877 17:13:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.877 17:13:34 -- nvmf/common.sh@46 -- # : 0 00:21:18.877 17:13:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:18.877 17:13:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:18.877 17:13:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:18.877 17:13:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.877 17:13:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.877 17:13:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:18.877 17:13:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:18.877 17:13:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:18.877 17:13:35 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:18.877 17:13:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:18.877 17:13:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.877 17:13:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:18.877 17:13:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:18.877 17:13:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:18.877 17:13:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.877 17:13:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.877 17:13:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.877 17:13:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:18.877 17:13:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:18.877 17:13:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:18.877 17:13:35 -- common/autotest_common.sh@10 -- # set +x 00:21:20.778 17:13:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:20.778 17:13:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:20.778 17:13:36 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:21:20.778 17:13:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:20.778 17:13:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:20.778 17:13:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:20.778 17:13:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:20.778 17:13:36 -- nvmf/common.sh@294 -- # net_devs=() 00:21:20.778 17:13:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:20.778 17:13:36 -- nvmf/common.sh@295 -- # e810=() 00:21:20.778 17:13:36 -- nvmf/common.sh@295 -- # local -ga e810 00:21:20.778 17:13:36 -- nvmf/common.sh@296 -- # x722=() 00:21:20.778 17:13:36 -- nvmf/common.sh@296 -- # local -ga x722 00:21:20.778 17:13:36 -- nvmf/common.sh@297 -- # mlx=() 00:21:20.778 17:13:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:20.778 17:13:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.778 17:13:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:20.778 17:13:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:20.778 17:13:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:20.778 17:13:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:20.778 17:13:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:20.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:20.778 17:13:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:20.778 17:13:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:20.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:20.778 17:13:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:20.778 17:13:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
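The vendor/device matching above has again classified both ports as E810 (8086:159b) out of the script's pci_bus_cache array; the loop that follows resolves each PCI address to its kernel netdev through sysfs. The same idiom in isolation, as a hedged sketch that enumerates with lspci instead of SPDK's cache:

  # Map each Intel E810 (8086:159b) PCI function to its net interface name.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"
      done
  done

The "Found net devices under ..." lines below are the script's version of exactly this lookup.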
00:21:20.778 17:13:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:20.778 17:13:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.778 17:13:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:20.778 17:13:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.778 17:13:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:20.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:20.778 17:13:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.778 17:13:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:20.778 17:13:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.778 17:13:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:20.778 17:13:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.778 17:13:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:20.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:20.778 17:13:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.778 17:13:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:20.778 17:13:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:20.778 17:13:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:20.778 17:13:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:20.778 17:13:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.778 17:13:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.778 17:13:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.778 17:13:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:20.778 17:13:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.778 17:13:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.778 17:13:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:20.778 17:13:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.778 17:13:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.778 17:13:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:20.778 17:13:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:20.778 17:13:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.778 17:13:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.778 17:13:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:20.778 17:13:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.778 17:13:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:20.778 17:13:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.778 17:13:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.037 17:13:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.037 17:13:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:21.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:21.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:21:21.037 00:21:21.037 --- 10.0.0.2 ping statistics --- 00:21:21.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.037 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:21.038 17:13:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:21:21.038 00:21:21.038 --- 10.0.0.1 ping statistics --- 00:21:21.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.038 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:21:21.038 17:13:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.038 17:13:36 -- nvmf/common.sh@410 -- # return 0 00:21:21.038 17:13:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:21.038 17:13:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.038 17:13:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:21.038 17:13:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:21.038 17:13:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.038 17:13:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:21.038 17:13:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:21.038 17:13:36 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=577978 00:21:21.038 17:13:36 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:21.038 17:13:36 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:21.038 17:13:36 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 577978 00:21:21.038 17:13:36 -- common/autotest_common.sh@819 -- # '[' -z 577978 ']' 00:21:21.038 17:13:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.038 17:13:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:21.038 17:13:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
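waitforlisten above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A hedged sketch of the launch-and-wait pattern, with the namespace and flags as in this run; the polling loop is a simplification of autotest_common.sh's helper:

  # Start the target inside its namespace and poll its RPC socket (sketch).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt is up as pid $nvmfpid"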
00:21:21.038 17:13:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:21.038 17:13:36 -- common/autotest_common.sh@10 -- # set +x 00:21:21.973 17:13:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:21.973 17:13:37 -- common/autotest_common.sh@852 -- # return 0 00:21:21.973 17:13:37 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.973 17:13:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.973 17:13:37 -- common/autotest_common.sh@10 -- # set +x 00:21:21.973 17:13:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.973 17:13:37 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:21.973 17:13:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.973 17:13:37 -- common/autotest_common.sh@10 -- # set +x 00:21:21.973 Malloc0 00:21:21.973 17:13:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.973 17:13:38 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:21.973 17:13:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.973 17:13:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.973 17:13:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.973 17:13:38 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.973 17:13:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.973 17:13:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.973 17:13:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.973 17:13:38 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.973 17:13:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.973 17:13:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.973 17:13:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.973 17:13:38 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:21.973 17:13:38 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:54.028 Fuzzing completed. Shutting down the fuzz application 00:21:54.028 00:21:54.028 Dumping successful admin opcodes: 00:21:54.028 8, 9, 10, 24, 00:21:54.028 Dumping successful io opcodes: 00:21:54.028 0, 9, 00:21:54.028 NS: 0x200003aeff00 I/O qp, Total commands completed: 453230, total successful commands: 2632, random_seed: 1968132544 00:21:54.028 NS: 0x200003aeff00 admin qp, Total commands completed: 56432, total successful commands: 448, random_seed: 3690901696 00:21:54.028 17:14:08 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:54.028 Fuzzing completed. 
Shutting down the fuzz application 00:21:54.028 00:21:54.028 Dumping successful admin opcodes: 00:21:54.028 24, 00:21:54.028 Dumping successful io opcodes: 00:21:54.028 00:21:54.028 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3211554549 00:21:54.028 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3211669893 00:21:54.028 17:14:09 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.028 17:14:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:54.028 17:14:09 -- common/autotest_common.sh@10 -- # set +x 00:21:54.028 17:14:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:54.028 17:14:09 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:54.028 17:14:09 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:54.028 17:14:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:54.028 17:14:09 -- nvmf/common.sh@116 -- # sync 00:21:54.028 17:14:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:54.028 17:14:09 -- nvmf/common.sh@119 -- # set +e 00:21:54.028 17:14:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:54.028 17:14:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:54.028 rmmod nvme_tcp 00:21:54.028 rmmod nvme_fabrics 00:21:54.028 rmmod nvme_keyring 00:21:54.028 17:14:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:54.028 17:14:09 -- nvmf/common.sh@123 -- # set -e 00:21:54.028 17:14:09 -- nvmf/common.sh@124 -- # return 0 00:21:54.028 17:14:09 -- nvmf/common.sh@477 -- # '[' -n 577978 ']' 00:21:54.028 17:14:09 -- nvmf/common.sh@478 -- # killprocess 577978 00:21:54.028 17:14:09 -- common/autotest_common.sh@926 -- # '[' -z 577978 ']' 00:21:54.028 17:14:09 -- common/autotest_common.sh@930 -- # kill -0 577978 00:21:54.028 17:14:09 -- common/autotest_common.sh@931 -- # uname 00:21:54.028 17:14:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:54.028 17:14:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 577978 00:21:54.028 17:14:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:54.028 17:14:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:54.028 17:14:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 577978' 00:21:54.028 killing process with pid 577978 00:21:54.028 17:14:09 -- common/autotest_common.sh@945 -- # kill 577978 00:21:54.028 17:14:09 -- common/autotest_common.sh@950 -- # wait 577978 00:21:54.286 17:14:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:54.286 17:14:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:54.286 17:14:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:54.286 17:14:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.286 17:14:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:54.286 17:14:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.286 17:14:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.286 17:14:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.184 17:14:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:56.184 17:14:12 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:56.184 00:21:56.184 real 0m37.379s 00:21:56.184 user 0m51.330s 00:21:56.184 sys 0m15.507s 
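For reference, the two fuzz passes timed above differ only in how commands are generated: the first is a 30 second (-t 30), seeded (-S 123456) pseudo-random pass against the live subsystem, while the second replays the curated request list shipped as example.json, which is why its counters show only admin commands completing. Their invocations, as recorded in this log with paths shortened:

  fuzz=./test/app/fuzz/nvme_fuzz/nvme_fuzz
  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # pass 1: timed, seeded random fuzzing
  $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
  # pass 2: replay the JSON-described command set
  $fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j ./test/app/fuzz/nvme_fuzz/example.json -a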
00:21:56.184 17:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.184 17:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:56.184 ************************************ 00:21:56.184 END TEST nvmf_fuzz 00:21:56.184 ************************************ 00:21:56.184 17:14:12 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:56.184 17:14:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:56.184 17:14:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:56.184 17:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:56.184 ************************************ 00:21:56.184 START TEST nvmf_multiconnection 00:21:56.184 ************************************ 00:21:56.184 17:14:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:56.442 * Looking for test storage... 00:21:56.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.442 17:14:12 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.442 17:14:12 -- nvmf/common.sh@7 -- # uname -s 00:21:56.442 17:14:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.442 17:14:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.442 17:14:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.442 17:14:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.442 17:14:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.442 17:14:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.442 17:14:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.442 17:14:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.442 17:14:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.442 17:14:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.442 17:14:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.442 17:14:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.442 17:14:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.442 17:14:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.442 17:14:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.442 17:14:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.442 17:14:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.442 17:14:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.442 17:14:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.443 17:14:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.443 17:14:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.443 17:14:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.443 17:14:12 -- paths/export.sh@5 -- # export PATH 00:21:56.443 17:14:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.443 17:14:12 -- nvmf/common.sh@46 -- # : 0 00:21:56.443 17:14:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:56.443 17:14:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:56.443 17:14:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:56.443 17:14:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.443 17:14:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.443 17:14:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:56.443 17:14:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:56.443 17:14:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:56.443 17:14:12 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.443 17:14:12 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.443 17:14:12 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:56.443 17:14:12 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:56.443 17:14:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:56.443 17:14:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.443 17:14:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:56.443 17:14:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:56.443 17:14:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:56.443 17:14:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.443 17:14:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.443 17:14:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.443 17:14:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:56.443 17:14:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:56.443 17:14:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:56.443 17:14:12 -- common/autotest_common.sh@10 -- 
# set +x 00:21:58.375 17:14:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:58.375 17:14:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:58.375 17:14:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:58.375 17:14:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:58.375 17:14:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:58.375 17:14:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:58.375 17:14:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:58.375 17:14:14 -- nvmf/common.sh@294 -- # net_devs=() 00:21:58.375 17:14:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:58.375 17:14:14 -- nvmf/common.sh@295 -- # e810=() 00:21:58.375 17:14:14 -- nvmf/common.sh@295 -- # local -ga e810 00:21:58.375 17:14:14 -- nvmf/common.sh@296 -- # x722=() 00:21:58.375 17:14:14 -- nvmf/common.sh@296 -- # local -ga x722 00:21:58.375 17:14:14 -- nvmf/common.sh@297 -- # mlx=() 00:21:58.375 17:14:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:58.375 17:14:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.375 17:14:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:58.375 17:14:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:58.375 17:14:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:58.375 17:14:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:58.375 17:14:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:58.375 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:58.375 17:14:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:58.375 17:14:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:58.375 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:58.375 17:14:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.375 17:14:14 -- 
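What gather_supported_nvmf_pci_devs is doing above: it works from an allow-list of PCI vendor:device IDs (Intel E810 parts 0x8086:0x1592/0x159b, X722 0x8086:0x37d2, and a set of Mellanox ConnectX parts), and the two 0x159b hits on this box bind the ice driver. Each match is then resolved to its kernel netdev through /sys/bus/pci/devices/<addr>/net/. A standalone sketch of that sysfs resolution, assuming lspci is available (the harness reads a prebuilt pci_bus_cache instead of calling lspci):

    # list the kernel net devices backing each Intel 0x159b (E810) port
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] && echo "$pci -> ${dev##*/}"   # e.g. 0000:0a:00.0 -> cvl_0_0
        done
    done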
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:58.375 17:14:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:58.375 17:14:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.375 17:14:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:58.375 17:14:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.375 17:14:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:58.375 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:58.375 17:14:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.375 17:14:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:58.375 17:14:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.375 17:14:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:58.375 17:14:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.375 17:14:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:58.375 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:58.375 17:14:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.375 17:14:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:58.375 17:14:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:58.375 17:14:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:58.375 17:14:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.375 17:14:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.375 17:14:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.375 17:14:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:58.375 17:14:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.375 17:14:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.375 17:14:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:58.375 17:14:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.375 17:14:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.375 17:14:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:58.375 17:14:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:58.375 17:14:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.375 17:14:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.375 17:14:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.375 17:14:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.375 17:14:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:58.375 17:14:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.375 17:14:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.375 17:14:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.375 17:14:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:58.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
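The nvmf_tcp_init block above builds the topology the rest of this test rides on: one E810 port (cvl_0_0, the target side) is moved into a fresh network namespace and given 10.0.0.2/24, while its cabled peer port (cvl_0_1, the initiator side) stays in the root namespace as 10.0.0.1/24, with an iptables rule opening the NVMe/TCP port. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check

Putting the target end in its own namespace lets a single dual-port host drive real NIC-to-NIC NVMe/TCP traffic rather than short-circuiting through kernel loopback, which is why both ping directions are checked next.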
00:21:58.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:21:58.375 00:21:58.375 --- 10.0.0.2 ping statistics --- 00:21:58.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.375 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:21:58.375 17:14:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:21:58.375 00:21:58.375 --- 10.0.0.1 ping statistics --- 00:21:58.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.375 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:21:58.375 17:14:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.375 17:14:14 -- nvmf/common.sh@410 -- # return 0 00:21:58.375 17:14:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:58.375 17:14:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.375 17:14:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:58.375 17:14:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.375 17:14:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:58.375 17:14:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:58.375 17:14:14 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:58.375 17:14:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:58.375 17:14:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:58.375 17:14:14 -- common/autotest_common.sh@10 -- # set +x 00:21:58.375 17:14:14 -- nvmf/common.sh@469 -- # nvmfpid=583839 00:21:58.375 17:14:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:58.375 17:14:14 -- nvmf/common.sh@470 -- # waitforlisten 583839 00:21:58.375 17:14:14 -- common/autotest_common.sh@819 -- # '[' -z 583839 ']' 00:21:58.375 17:14:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.375 17:14:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:58.375 17:14:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.375 17:14:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:58.375 17:14:14 -- common/autotest_common.sh@10 -- # set +x 00:21:58.375 [2024-07-20 17:14:14.502065] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:58.375 [2024-07-20 17:14:14.502166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.633 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.633 [2024-07-20 17:14:14.571928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.633 [2024-07-20 17:14:14.663269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:58.633 [2024-07-20 17:14:14.663448] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.633 [2024-07-20 17:14:14.663468] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
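nvmfappstart above launches nvmf_tgt inside the target namespace (-m 0xF pins four reactors, matching the "Total cores available: 4" notice, and -e 0xFFFF enables all tracepoint groups), then waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. The same start-and-wait pattern, sketched from the SPDK tree root with an rpc.py poll standing in for waitforlisten's internal socket check:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC UNIX socket until the target answers (give up after ~10 s)
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done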
00:21:58.633 [2024-07-20 17:14:14.663484] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.633 [2024-07-20 17:14:14.663567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.633 [2024-07-20 17:14:14.663636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.633 [2024-07-20 17:14:14.663724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.633 [2024-07-20 17:14:14.663726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.563 17:14:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:59.563 17:14:15 -- common/autotest_common.sh@852 -- # return 0 00:21:59.563 17:14:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:59.563 17:14:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.563 17:14:15 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 [2024-07-20 17:14:15.459370] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:59.563 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.563 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 Malloc1 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 [2024-07-20 17:14:15.514306] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.563 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 Malloc2 00:21:59.563 17:14:15 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.563 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 Malloc3 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.563 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 Malloc4 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 
-- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.563 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 Malloc5 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.563 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.563 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.563 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:59.563 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.563 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 Malloc6 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.821 17:14:15 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 Malloc7 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.821 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 Malloc8 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.821 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.821 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:59.821 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.821 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.822 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 Malloc9 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 17:14:15 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.822 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 Malloc10 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.822 17:14:15 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 Malloc11 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:21:59.822 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.822 17:14:15 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:59.822 17:14:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.822 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:00.079 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.079 17:14:15 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:00.079 17:14:15 -- common/autotest_common.sh@551 -- # 
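The long run of rpc_cmd calls above is an unrolled loop: each of the 11 iterations creates a malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from the script header), wraps it in its own subsystem, and exposes it on the shared TCP listener. The equivalent against rpc.py directly (rpc_cmd in the harness is a thin wrapper over the same RPCs):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done

-a marks each subsystem as allowing any host, and -s sets the serial number (SPDK1..SPDK11) that the initiator side greps for once the namespaces attach.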
xtrace_disable 00:22:00.079 17:14:15 -- common/autotest_common.sh@10 -- # set +x 00:22:00.079 17:14:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.079 17:14:15 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:00.079 17:14:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.079 17:14:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:00.642 17:14:16 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:00.642 17:14:16 -- common/autotest_common.sh@1177 -- # local i=0 00:22:00.642 17:14:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:00.642 17:14:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:00.642 17:14:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:02.536 17:14:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:02.536 17:14:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:02.536 17:14:18 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:02.536 17:14:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:02.536 17:14:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:02.536 17:14:18 -- common/autotest_common.sh@1187 -- # return 0 00:22:02.536 17:14:18 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.536 17:14:18 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:03.101 17:14:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:03.101 17:14:19 -- common/autotest_common.sh@1177 -- # local i=0 00:22:03.101 17:14:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:03.358 17:14:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:03.358 17:14:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:05.252 17:14:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:05.252 17:14:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:05.252 17:14:21 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:05.252 17:14:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:05.252 17:14:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:05.252 17:14:21 -- common/autotest_common.sh@1187 -- # return 0 00:22:05.252 17:14:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:05.252 17:14:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:05.817 17:14:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:05.817 17:14:21 -- common/autotest_common.sh@1177 -- # local i=0 00:22:05.817 17:14:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:05.817 17:14:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:05.817 17:14:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:08.340 17:14:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:08.340 17:14:23 -- common/autotest_common.sh@1186 -- # 
lsblk -l -o NAME,SERIAL 00:22:08.340 17:14:23 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:08.340 17:14:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:08.340 17:14:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:08.340 17:14:23 -- common/autotest_common.sh@1187 -- # return 0 00:22:08.340 17:14:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:08.341 17:14:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:08.597 17:14:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:08.597 17:14:24 -- common/autotest_common.sh@1177 -- # local i=0 00:22:08.597 17:14:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:08.597 17:14:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:08.597 17:14:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:10.488 17:14:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:10.488 17:14:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:10.488 17:14:26 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:10.488 17:14:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:10.488 17:14:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.488 17:14:26 -- common/autotest_common.sh@1187 -- # return 0 00:22:10.488 17:14:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:10.488 17:14:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:11.051 17:14:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:11.051 17:14:27 -- common/autotest_common.sh@1177 -- # local i=0 00:22:11.051 17:14:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:11.051 17:14:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:11.051 17:14:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:13.571 17:14:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:13.571 17:14:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:13.571 17:14:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:13.571 17:14:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:13.571 17:14:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:13.571 17:14:29 -- common/autotest_common.sh@1187 -- # return 0 00:22:13.571 17:14:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:13.571 17:14:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:13.828 17:14:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:13.828 17:14:29 -- common/autotest_common.sh@1177 -- # local i=0 00:22:13.828 17:14:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.828 17:14:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:13.828 17:14:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:16.368 
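Each connect/wait pair above repeats for cnode1 through cnode11: nvme-cli logs in to one subsystem, then waitforserial polls lsblk (up to 15 tries, 2 s apart, as visible in the trace) until a block device carrying that subsystem's serial appears. Condensed into one loop, with the hostid reusing the uuid portion of the hostnqn as the harness does:

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    for i in $(seq 1 11); do
        nvme connect --hostnqn=$hostnqn --hostid=${hostnqn#*uuid:} \
            -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
        # give the namespace up to ~30 s to surface as a block device
        tries=0
        until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -ge 1 ]; do
            sleep 2; tries=$((tries + 1)); [ "$tries" -ge 15 ] && break
        done
    done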
17:14:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:16.368 17:14:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:16.368 17:14:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:16.369 17:14:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:16.369 17:14:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:16.369 17:14:31 -- common/autotest_common.sh@1187 -- # return 0 00:22:16.369 17:14:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:16.369 17:14:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:16.625 17:14:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:16.625 17:14:32 -- common/autotest_common.sh@1177 -- # local i=0 00:22:16.625 17:14:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:16.625 17:14:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:16.625 17:14:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:18.537 17:14:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:18.537 17:14:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:18.537 17:14:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:18.537 17:14:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:18.537 17:14:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.537 17:14:34 -- common/autotest_common.sh@1187 -- # return 0 00:22:18.537 17:14:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.537 17:14:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:19.468 17:14:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:19.468 17:14:35 -- common/autotest_common.sh@1177 -- # local i=0 00:22:19.468 17:14:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:19.468 17:14:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:19.468 17:14:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:21.988 17:14:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:21.988 17:14:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:21.988 17:14:37 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:21.988 17:14:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:21.988 17:14:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:21.988 17:14:37 -- common/autotest_common.sh@1187 -- # return 0 00:22:21.988 17:14:37 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:21.988 17:14:37 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:22.255 17:14:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:22.255 17:14:38 -- common/autotest_common.sh@1177 -- # local i=0 00:22:22.255 17:14:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:22.255 17:14:38 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:22.255 17:14:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:24.152 17:14:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:24.152 17:14:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:24.152 17:14:40 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:24.152 17:14:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:24.152 17:14:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:24.152 17:14:40 -- common/autotest_common.sh@1187 -- # return 0 00:22:24.152 17:14:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.152 17:14:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:25.084 17:14:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:25.084 17:14:41 -- common/autotest_common.sh@1177 -- # local i=0 00:22:25.084 17:14:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:25.084 17:14:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:25.084 17:14:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:26.980 17:14:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:26.980 17:14:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:26.980 17:14:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:26.980 17:14:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:26.980 17:14:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:26.980 17:14:43 -- common/autotest_common.sh@1187 -- # return 0 00:22:26.980 17:14:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.980 17:14:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:27.908 17:14:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:27.908 17:14:43 -- common/autotest_common.sh@1177 -- # local i=0 00:22:27.908 17:14:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:27.908 17:14:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:27.908 17:14:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:29.798 17:14:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:29.798 17:14:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:29.798 17:14:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:29.798 17:14:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:29.798 17:14:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:29.798 17:14:45 -- common/autotest_common.sh@1187 -- # return 0 00:22:29.798 17:14:45 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:29.798 [global] 00:22:29.798 thread=1 00:22:29.798 invalidate=1 00:22:29.798 rw=read 00:22:29.798 time_based=1 00:22:29.798 runtime=10 00:22:29.798 ioengine=libaio 00:22:29.798 direct=1 00:22:29.798 bs=262144 00:22:29.798 iodepth=64 00:22:29.798 norandommap=1 00:22:29.798 numjobs=1 00:22:29.798 00:22:29.798 [job0] 
00:22:29.798 filename=/dev/nvme0n1 00:22:29.798 [job1] 00:22:29.798 filename=/dev/nvme10n1 00:22:29.798 [job2] 00:22:29.798 filename=/dev/nvme1n1 00:22:29.798 [job3] 00:22:29.798 filename=/dev/nvme2n1 00:22:29.798 [job4] 00:22:29.798 filename=/dev/nvme3n1 00:22:29.798 [job5] 00:22:29.798 filename=/dev/nvme4n1 00:22:29.798 [job6] 00:22:29.798 filename=/dev/nvme5n1 00:22:29.798 [job7] 00:22:29.798 filename=/dev/nvme6n1 00:22:29.798 [job8] 00:22:29.798 filename=/dev/nvme7n1 00:22:29.798 [job9] 00:22:29.798 filename=/dev/nvme8n1 00:22:29.798 [job10] 00:22:29.798 filename=/dev/nvme9n1 00:22:30.054 Could not set queue depth (nvme0n1) 00:22:30.054 Could not set queue depth (nvme10n1) 00:22:30.054 Could not set queue depth (nvme1n1) 00:22:30.054 Could not set queue depth (nvme2n1) 00:22:30.054 Could not set queue depth (nvme3n1) 00:22:30.054 Could not set queue depth (nvme4n1) 00:22:30.054 Could not set queue depth (nvme5n1) 00:22:30.054 Could not set queue depth (nvme6n1) 00:22:30.054 Could not set queue depth (nvme7n1) 00:22:30.054 Could not set queue depth (nvme8n1) 00:22:30.054 Could not set queue depth (nvme9n1) 00:22:30.054 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:30.054 fio-3.35 00:22:30.054 Starting 11 threads 00:22:42.264 00:22:42.264 job0: (groupid=0, jobs=1): err= 0: pid=588196: Sat Jul 20 17:14:56 2024 00:22:42.264 read: IOPS=472, BW=118MiB/s (124MB/s)(1206MiB/10211msec) 00:22:42.264 slat (usec): min=9, max=565401, avg=1344.35, stdev=10884.86 00:22:42.264 clat (msec): min=2, max=1269, avg=134.07, stdev=121.65 00:22:42.264 lat (msec): min=2, max=1269, avg=135.42, stdev=122.81 00:22:42.264 clat percentiles (msec): 00:22:42.264 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 43], 20.00th=[ 72], 00:22:42.264 | 30.00th=[ 90], 40.00th=[ 101], 50.00th=[ 112], 60.00th=[ 127], 00:22:42.264 | 70.00th=[ 146], 80.00th=[ 171], 90.00th=[ 205], 95.00th=[ 264], 00:22:42.264 | 99.00th=[ 785], 99.50th=[ 927], 99.90th=[ 1045], 99.95th=[ 1234], 00:22:42.264 | 99.99th=[ 1267] 00:22:42.264 bw ( KiB/s): min=30720, max=228864, per=9.11%, avg=128242.53, stdev=46348.61, samples=19 00:22:42.264 iops : min= 120, max= 894, avg=500.95, stdev=181.05, samples=19 00:22:42.264 lat (msec) : 4=0.58%, 10=3.23%, 20=2.82%, 50=6.14%, 100=27.22% 
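fio-wrapper above simply materializes the job file echoed into the log (global section: 256 KiB sequential reads, queue depth 64, libaio, O_DIRECT, 10 s time-based; one [jobN] stanza per connected namespace) and hands it to fio. An equivalent direct invocation for a single device, as a sketch rather than the wrapper's exact code, assuming fio 3.x and the same /dev/nvmeXn1 naming:

    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=read --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
        --time_based=1 --runtime=10 --invalidate=1 --norandommap=1 \
        --numjobs=1 --thread=1

The wrapper's file simply repeats the filename stanza for each of the 11 namespaces so all targets are driven concurrently, which is the whole point of the multiconnection test.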
00:22:42.264 lat (msec) : 250=54.43%, 500=2.96%, 750=1.29%, 1000=1.08%, 2000=0.25% 00:22:42.264 cpu : usr=0.31%, sys=1.40%, ctx=1496, majf=0, minf=3721 00:22:42.264 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:42.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.264 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.264 issued rwts: total=4823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.264 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.264 job1: (groupid=0, jobs=1): err= 0: pid=588197: Sat Jul 20 17:14:56 2024 00:22:42.264 read: IOPS=481, BW=120MiB/s (126MB/s)(1212MiB/10075msec) 00:22:42.264 slat (usec): min=8, max=171956, avg=1110.01, stdev=7051.50 00:22:42.264 clat (usec): min=1610, max=836984, avg=131763.53, stdev=115308.33 00:22:42.264 lat (usec): min=1679, max=837032, avg=132873.53, stdev=116430.23 00:22:42.264 clat percentiles (msec): 00:22:42.264 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 30], 20.00th=[ 52], 00:22:42.264 | 30.00th=[ 77], 40.00th=[ 96], 50.00th=[ 118], 60.00th=[ 133], 00:22:42.264 | 70.00th=[ 150], 80.00th=[ 171], 90.00th=[ 220], 95.00th=[ 279], 00:22:42.264 | 99.00th=[ 659], 99.50th=[ 776], 99.90th=[ 835], 99.95th=[ 835], 00:22:42.264 | 99.99th=[ 835] 00:22:42.264 bw ( KiB/s): min=24576, max=244736, per=8.71%, avg=122521.60, stdev=55970.09, samples=20 00:22:42.264 iops : min= 96, max= 956, avg=478.60, stdev=218.63, samples=20 00:22:42.264 lat (msec) : 2=0.02%, 4=0.16%, 10=1.05%, 20=4.52%, 50=13.94% 00:22:42.264 lat (msec) : 100=21.88%, 250=51.50%, 500=4.27%, 750=1.79%, 1000=0.87% 00:22:42.265 cpu : usr=0.16%, sys=1.38%, ctx=1609, majf=0, minf=4097 00:22:42.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:42.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.265 issued rwts: total=4849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.265 job2: (groupid=0, jobs=1): err= 0: pid=588198: Sat Jul 20 17:14:56 2024 00:22:42.265 read: IOPS=610, BW=153MiB/s (160MB/s)(1529MiB/10021msec) 00:22:42.265 slat (usec): min=8, max=278494, avg=1151.15, stdev=6668.95 00:22:42.265 clat (msec): min=3, max=762, avg=103.63, stdev=94.73 00:22:42.265 lat (msec): min=3, max=848, avg=104.78, stdev=95.28 00:22:42.265 clat percentiles (msec): 00:22:42.265 | 1.00th=[ 18], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 49], 00:22:42.265 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 64], 60.00th=[ 84], 00:22:42.265 | 70.00th=[ 108], 80.00th=[ 142], 90.00th=[ 207], 95.00th=[ 296], 00:22:42.265 | 99.00th=[ 506], 99.50th=[ 625], 99.90th=[ 743], 99.95th=[ 760], 00:22:42.265 | 99.99th=[ 760] 00:22:42.265 bw ( KiB/s): min=33792, max=339968, per=11.01%, avg=154982.40, stdev=93855.13, samples=20 00:22:42.265 iops : min= 132, max= 1328, avg=605.40, stdev=366.62, samples=20 00:22:42.265 lat (msec) : 4=0.03%, 10=0.20%, 20=0.95%, 50=25.73%, 100=40.15% 00:22:42.265 lat (msec) : 250=26.39%, 500=5.49%, 750=0.98%, 1000=0.08% 00:22:42.265 cpu : usr=0.31%, sys=1.89%, ctx=1498, majf=0, minf=4097 00:22:42.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:42.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.265 issued rwts: total=6117,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:22:42.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.265 job3: (groupid=0, jobs=1): err= 0: pid=588199: Sat Jul 20 17:14:56 2024 00:22:42.265 read: IOPS=532, BW=133MiB/s (139MB/s)(1340MiB/10071msec) 00:22:42.265 slat (usec): min=9, max=194560, avg=1577.17, stdev=7676.70 00:22:42.265 clat (msec): min=2, max=674, avg=118.62, stdev=93.48 00:22:42.265 lat (msec): min=3, max=674, avg=120.19, stdev=94.59 00:22:42.265 clat percentiles (msec): 00:22:42.265 | 1.00th=[ 14], 5.00th=[ 26], 10.00th=[ 42], 20.00th=[ 48], 00:22:42.265 | 30.00th=[ 59], 40.00th=[ 81], 50.00th=[ 100], 60.00th=[ 115], 00:22:42.265 | 70.00th=[ 136], 80.00th=[ 163], 90.00th=[ 215], 95.00th=[ 309], 00:22:42.265 | 99.00th=[ 527], 99.50th=[ 550], 99.90th=[ 600], 99.95th=[ 600], 00:22:42.265 | 99.99th=[ 676] 00:22:42.265 bw ( KiB/s): min=34816, max=330752, per=9.63%, avg=135577.60, stdev=70258.40, samples=20 00:22:42.265 iops : min= 136, max= 1292, avg=529.60, stdev=274.45, samples=20 00:22:42.265 lat (msec) : 4=0.04%, 10=0.32%, 20=2.82%, 50=20.28%, 100=26.93% 00:22:42.265 lat (msec) : 250=41.50%, 500=6.61%, 750=1.51% 00:22:42.265 cpu : usr=0.28%, sys=1.75%, ctx=1311, majf=0, minf=4097 00:22:42.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:42.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.265 issued rwts: total=5359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.265 job4: (groupid=0, jobs=1): err= 0: pid=588200: Sat Jul 20 17:14:56 2024 00:22:42.265 read: IOPS=491, BW=123MiB/s (129MB/s)(1232MiB/10027msec) 00:22:42.265 slat (usec): min=9, max=555649, avg=1344.77, stdev=11686.32 00:22:42.265 clat (msec): min=3, max=687, avg=128.76, stdev=102.54 00:22:42.265 lat (msec): min=3, max=687, avg=130.11, stdev=103.30 00:22:42.265 clat percentiles (msec): 00:22:42.265 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 45], 20.00th=[ 63], 00:22:42.265 | 30.00th=[ 77], 40.00th=[ 92], 50.00th=[ 110], 60.00th=[ 125], 00:22:42.265 | 70.00th=[ 142], 80.00th=[ 165], 90.00th=[ 222], 95.00th=[ 326], 00:22:42.265 | 99.00th=[ 609], 99.50th=[ 676], 99.90th=[ 684], 99.95th=[ 684], 00:22:42.265 | 99.99th=[ 693] 00:22:42.265 bw ( KiB/s): min=39936, max=231424, per=8.85%, avg=124569.60, stdev=51891.94, samples=20 00:22:42.265 iops : min= 156, max= 904, avg=486.60, stdev=202.70, samples=20 00:22:42.265 lat (msec) : 4=0.04%, 10=2.09%, 20=1.48%, 50=9.78%, 100=31.10% 00:22:42.265 lat (msec) : 250=48.22%, 500=4.87%, 750=2.41% 00:22:42.265 cpu : usr=0.24%, sys=1.16%, ctx=1369, majf=0, minf=4097 00:22:42.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:42.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.265 issued rwts: total=4929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.265 job5: (groupid=0, jobs=1): err= 0: pid=588202: Sat Jul 20 17:14:56 2024 00:22:42.265 read: IOPS=455, BW=114MiB/s (119MB/s)(1154MiB/10135msec) 00:22:42.265 slat (usec): min=13, max=346133, avg=2006.58, stdev=9355.11 00:22:42.265 clat (msec): min=3, max=1017, avg=138.44, stdev=141.17 00:22:42.265 lat (msec): min=3, max=1107, avg=140.44, stdev=142.51 00:22:42.265 clat percentiles (msec): 
00:22:42.265 | 1.00th=[ 7], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 61], 00:22:42.265 | 30.00th=[ 82], 40.00th=[ 97], 50.00th=[ 108], 60.00th=[ 120], 00:22:42.265 | 70.00th=[ 138], 80.00th=[ 167], 90.00th=[ 213], 95.00th=[ 363], 00:22:42.265 | 99.00th=[ 818], 99.50th=[ 894], 99.90th=[ 995], 99.95th=[ 995], 00:22:42.265 | 99.99th=[ 1020] 00:22:42.265 bw ( KiB/s): min=11264, max=293888, per=8.28%, avg=116505.60, stdev=71700.11, samples=20 00:22:42.265 iops : min= 44, max= 1148, avg=455.10, stdev=280.08, samples=20 00:22:42.265 lat (msec) : 4=0.02%, 10=2.30%, 20=1.30%, 50=10.42%, 100=29.12% 00:22:42.265 lat (msec) : 250=50.03%, 500=2.56%, 750=2.06%, 1000=2.15%, 2000=0.04% 00:22:42.265 cpu : usr=0.35%, sys=1.61%, ctx=1030, majf=0, minf=4097 00:22:42.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:42.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.265 issued rwts: total=4615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.265 job6: (groupid=0, jobs=1): err= 0: pid=588213: Sat Jul 20 17:14:56 2024 00:22:42.265 read: IOPS=475, BW=119MiB/s (125MB/s)(1205MiB/10137msec) 00:22:42.265 slat (usec): min=9, max=338807, avg=1380.88, stdev=8763.96 00:22:42.265 clat (usec): min=1829, max=1034.7k, avg=133102.96, stdev=127386.41 00:22:42.265 lat (usec): min=1849, max=1104.7k, avg=134483.84, stdev=129050.80 00:22:42.265 clat percentiles (msec): 00:22:42.265 | 1.00th=[ 12], 5.00th=[ 27], 10.00th=[ 44], 20.00th=[ 58], 00:22:42.265 | 30.00th=[ 71], 40.00th=[ 84], 50.00th=[ 106], 60.00th=[ 124], 00:22:42.265 | 70.00th=[ 138], 80.00th=[ 178], 90.00th=[ 226], 95.00th=[ 317], 00:22:42.265 | 99.00th=[ 785], 99.50th=[ 827], 99.90th=[ 1011], 99.95th=[ 1011], 00:22:42.265 | 99.99th=[ 1036] 00:22:42.265 bw ( KiB/s): min=15872, max=257024, per=8.65%, avg=121779.20, stdev=66455.29, samples=20 00:22:42.265 iops : min= 62, max= 1004, avg=475.70, stdev=259.59, samples=20 00:22:42.265 lat (msec) : 2=0.02%, 4=0.04%, 10=0.46%, 20=2.74%, 50=10.62% 00:22:42.265 lat (msec) : 100=34.29%, 250=44.60%, 500=4.36%, 750=1.58%, 1000=1.14% 00:22:42.265 lat (msec) : 2000=0.17% 00:22:42.265 cpu : usr=0.24%, sys=1.23%, ctx=1518, majf=0, minf=4097 00:22:42.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:42.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.265 issued rwts: total=4821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.265 job7: (groupid=0, jobs=1): err= 0: pid=588221: Sat Jul 20 17:14:56 2024 00:22:42.265 read: IOPS=441, BW=110MiB/s (116MB/s)(1118MiB/10136msec) 00:22:42.265 slat (usec): min=9, max=183881, avg=1960.87, stdev=7979.65 00:22:42.265 clat (usec): min=1835, max=1124.9k, avg=142956.91, stdev=136666.32 00:22:42.265 lat (usec): min=1892, max=1124.9k, avg=144917.77, stdev=138155.76 00:22:42.265 clat percentiles (msec): 00:22:42.265 | 1.00th=[ 17], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 72], 00:22:42.265 | 30.00th=[ 80], 40.00th=[ 95], 50.00th=[ 114], 60.00th=[ 130], 00:22:42.265 | 70.00th=[ 150], 80.00th=[ 171], 90.00th=[ 197], 95.00th=[ 284], 00:22:42.265 | 99.00th=[ 818], 99.50th=[ 969], 99.90th=[ 1062], 99.95th=[ 1062], 00:22:42.265 | 99.99th=[ 1133] 00:22:42.265 bw ( KiB/s): 
min=10240, max=244736, per=8.02%, avg=112870.40, stdev=63924.93, samples=20 00:22:42.265 iops : min= 40, max= 956, avg=440.90, stdev=249.71, samples=20 00:22:42.265 lat (msec) : 2=0.04%, 4=0.40%, 10=0.29%, 20=0.34%, 50=2.46% 00:22:42.265 lat (msec) : 100=39.19%, 250=51.80%, 500=1.50%, 750=2.39%, 1000=1.23% 00:22:42.265 lat (msec) : 2000=0.36% 00:22:42.265 cpu : usr=0.33%, sys=1.43%, ctx=1078, majf=0, minf=4097 00:22:42.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:42.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.265 issued rwts: total=4473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.265 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.265 job8: (groupid=0, jobs=1): err= 0: pid=588246: Sat Jul 20 17:14:56 2024 00:22:42.265 read: IOPS=501, BW=125MiB/s (132MB/s)(1273MiB/10147msec) 00:22:42.265 slat (usec): min=10, max=644600, avg=1528.25, stdev=10645.64 00:22:42.265 clat (msec): min=8, max=934, avg=125.90, stdev=147.94 00:22:42.265 lat (msec): min=8, max=1465, avg=127.43, stdev=149.44 00:22:42.265 clat percentiles (msec): 00:22:42.265 | 1.00th=[ 19], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 60], 00:22:42.265 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 80], 60.00th=[ 94], 00:22:42.265 | 70.00th=[ 109], 80.00th=[ 138], 90.00th=[ 207], 95.00th=[ 372], 00:22:42.265 | 99.00th=[ 835], 99.50th=[ 894], 99.90th=[ 936], 99.95th=[ 936], 00:22:42.265 | 99.99th=[ 936] 00:22:42.265 bw ( KiB/s): min=16384, max=257024, per=9.63%, avg=135505.32, stdev=81609.09, samples=19 00:22:42.265 iops : min= 64, max= 1004, avg=529.32, stdev=318.79, samples=19 00:22:42.265 lat (msec) : 10=0.08%, 20=1.12%, 50=6.15%, 100=57.52%, 250=28.65% 00:22:42.265 lat (msec) : 500=2.34%, 750=1.36%, 1000=2.79% 00:22:42.265 cpu : usr=0.23%, sys=1.92%, ctx=1308, majf=0, minf=4097 00:22:42.265 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:42.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.266 issued rwts: total=5092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.266 job9: (groupid=0, jobs=1): err= 0: pid=588271: Sat Jul 20 17:14:56 2024 00:22:42.266 read: IOPS=486, BW=122MiB/s (128MB/s)(1227MiB/10077msec) 00:22:42.266 slat (usec): min=9, max=299179, avg=1147.47, stdev=7447.70 00:22:42.266 clat (msec): min=4, max=870, avg=130.19, stdev=98.42 00:22:42.266 lat (msec): min=4, max=886, avg=131.34, stdev=99.67 00:22:42.266 clat percentiles (msec): 00:22:42.266 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 47], 20.00th=[ 64], 00:22:42.266 | 30.00th=[ 74], 40.00th=[ 90], 50.00th=[ 113], 60.00th=[ 133], 00:22:42.266 | 70.00th=[ 150], 80.00th=[ 182], 90.00th=[ 222], 95.00th=[ 271], 00:22:42.266 | 99.00th=[ 575], 99.50th=[ 743], 99.90th=[ 844], 99.95th=[ 869], 00:22:42.266 | 99.99th=[ 869] 00:22:42.266 bw ( KiB/s): min=15872, max=244224, per=8.81%, avg=123980.80, stdev=55647.76, samples=20 00:22:42.266 iops : min= 62, max= 954, avg=484.30, stdev=217.37, samples=20 00:22:42.266 lat (msec) : 10=0.51%, 20=2.02%, 50=8.97%, 100=31.91%, 250=48.95% 00:22:42.266 lat (msec) : 500=6.22%, 750=0.96%, 1000=0.47% 00:22:42.266 cpu : usr=0.21%, sys=1.35%, ctx=1597, majf=0, minf=4097 00:22:42.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 
00:22:42.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.266 issued rwts: total=4907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.266 job10: (groupid=0, jobs=1): err= 0: pid=588292: Sat Jul 20 17:14:56 2024 00:22:42.266 read: IOPS=606, BW=152MiB/s (159MB/s)(1538MiB/10143msec) 00:22:42.266 slat (usec): min=8, max=564722, avg=1329.51, stdev=8938.54 00:22:42.266 clat (msec): min=2, max=831, avg=104.08, stdev=103.81 00:22:42.266 lat (msec): min=2, max=931, avg=105.41, stdev=104.98 00:22:42.266 clat percentiles (msec): 00:22:42.266 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 34], 20.00th=[ 43], 00:22:42.266 | 30.00th=[ 52], 40.00th=[ 63], 50.00th=[ 82], 60.00th=[ 103], 00:22:42.266 | 70.00th=[ 126], 80.00th=[ 144], 90.00th=[ 169], 95.00th=[ 215], 00:22:42.266 | 99.00th=[ 735], 99.50th=[ 810], 99.90th=[ 827], 99.95th=[ 827], 00:22:42.266 | 99.99th=[ 835] 00:22:42.266 bw ( KiB/s): min=10240, max=321024, per=11.08%, avg=155887.60, stdev=73516.20, samples=20 00:22:42.266 iops : min= 40, max= 1254, avg=608.90, stdev=287.21, samples=20 00:22:42.266 lat (msec) : 4=0.07%, 10=0.33%, 20=3.46%, 50=24.62%, 100=30.46% 00:22:42.266 lat (msec) : 250=37.51%, 500=1.82%, 750=0.80%, 1000=0.94% 00:22:42.266 cpu : usr=0.35%, sys=1.98%, ctx=1709, majf=0, minf=4097 00:22:42.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:42.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:42.266 issued rwts: total=6153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.266 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:42.266 00:22:42.266 Run status group 0 (all jobs): 00:22:42.266 READ: bw=1374MiB/s (1441MB/s), 110MiB/s-153MiB/s (116MB/s-160MB/s), io=13.7GiB (14.7GB), run=10021-10211msec 00:22:42.266 00:22:42.266 Disk stats (read/write): 00:22:42.266 nvme0n1: ios=9525/0, merge=0/0, ticks=1220675/0, in_queue=1220675, util=96.97% 00:22:42.266 nvme10n1: ios=9485/0, merge=0/0, ticks=1238507/0, in_queue=1238507, util=97.12% 00:22:42.266 nvme1n1: ios=11915/0, merge=0/0, ticks=1234799/0, in_queue=1234799, util=97.42% 00:22:42.266 nvme2n1: ios=10450/0, merge=0/0, ticks=1229701/0, in_queue=1229701, util=97.59% 00:22:42.266 nvme3n1: ios=9512/0, merge=0/0, ticks=1234658/0, in_queue=1234658, util=97.68% 00:22:42.266 nvme4n1: ios=9081/0, merge=0/0, ticks=1178578/0, in_queue=1178578, util=98.06% 00:22:42.266 nvme5n1: ios=9494/0, merge=0/0, ticks=1179541/0, in_queue=1179541, util=98.24% 00:22:42.266 nvme6n1: ios=8812/0, merge=0/0, ticks=1164446/0, in_queue=1164446, util=98.37% 00:22:42.266 nvme7n1: ios=10056/0, merge=0/0, ticks=1166782/0, in_queue=1166782, util=98.84% 00:22:42.266 nvme8n1: ios=9538/0, merge=0/0, ticks=1236345/0, in_queue=1236345, util=99.06% 00:22:42.266 nvme9n1: ios=12058/0, merge=0/0, ticks=1230515/0, in_queue=1230515, util=99.22% 00:22:42.266 17:14:56 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:42.266 [global] 00:22:42.266 thread=1 00:22:42.266 invalidate=1 00:22:42.266 rw=randwrite 00:22:42.266 time_based=1 00:22:42.266 runtime=10 00:22:42.266 ioengine=libaio 00:22:42.266 direct=1 00:22:42.266 bs=262144 00:22:42.266 iodepth=64 00:22:42.266 norandommap=1 
00:22:42.266 numjobs=1 00:22:42.266 00:22:42.266 [job0] 00:22:42.266 filename=/dev/nvme0n1 00:22:42.266 [job1] 00:22:42.266 filename=/dev/nvme10n1 00:22:42.266 [job2] 00:22:42.266 filename=/dev/nvme1n1 00:22:42.266 [job3] 00:22:42.266 filename=/dev/nvme2n1 00:22:42.266 [job4] 00:22:42.266 filename=/dev/nvme3n1 00:22:42.266 [job5] 00:22:42.266 filename=/dev/nvme4n1 00:22:42.266 [job6] 00:22:42.266 filename=/dev/nvme5n1 00:22:42.266 [job7] 00:22:42.266 filename=/dev/nvme6n1 00:22:42.266 [job8] 00:22:42.266 filename=/dev/nvme7n1 00:22:42.266 [job9] 00:22:42.266 filename=/dev/nvme8n1 00:22:42.266 [job10] 00:22:42.266 filename=/dev/nvme9n1 00:22:42.266 Could not set queue depth (nvme0n1) 00:22:42.266 Could not set queue depth (nvme10n1) 00:22:42.266 Could not set queue depth (nvme1n1) 00:22:42.266 Could not set queue depth (nvme2n1) 00:22:42.266 Could not set queue depth (nvme3n1) 00:22:42.266 Could not set queue depth (nvme4n1) 00:22:42.266 Could not set queue depth (nvme5n1) 00:22:42.266 Could not set queue depth (nvme6n1) 00:22:42.266 Could not set queue depth (nvme7n1) 00:22:42.266 Could not set queue depth (nvme8n1) 00:22:42.266 Could not set queue depth (nvme9n1) 00:22:42.266 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:42.266 fio-3.35 00:22:42.266 Starting 11 threads 00:22:52.238 00:22:52.238 job0: (groupid=0, jobs=1): err= 0: pid=589251: Sat Jul 20 17:15:08 2024 00:22:52.238 write: IOPS=61, BW=15.5MiB/s (16.2MB/s)(162MiB/10481msec); 0 zone resets 00:22:52.238 slat (usec): min=16, max=1457.2k, avg=15449.47, stdev=71155.42 00:22:52.238 clat (msec): min=197, max=3372, avg=1017.18, stdev=718.63 00:22:52.238 lat (msec): min=197, max=3372, avg=1032.63, stdev=726.76 00:22:52.238 clat percentiles (msec): 00:22:52.238 | 1.00th=[ 239], 5.00th=[ 447], 10.00th=[ 468], 20.00th=[ 523], 00:22:52.238 | 30.00th=[ 575], 40.00th=[ 592], 50.00th=[ 676], 60.00th=[ 735], 00:22:52.238 | 70.00th=[ 1020], 80.00th=[ 1821], 90.00th=[ 2072], 95.00th=[ 2232], 00:22:52.238 | 99.00th=[ 3339], 99.50th=[ 3373], 99.90th=[ 3373], 99.95th=[ 3373], 00:22:52.238 | 99.99th=[ 3373] 00:22:52.238 bw ( KiB/s): min= 2052, max=36864, per=3.03%, avg=17623.94, stdev=10445.32, samples=17 00:22:52.238 
iops : min= 8, max= 144, avg=68.76, stdev=40.85, samples=17 00:22:52.238 lat (msec) : 250=1.39%, 500=15.10%, 750=44.99%, 1000=8.47%, 2000=17.26% 00:22:52.238 lat (msec) : >=2000=12.79% 00:22:52.238 cpu : usr=0.15%, sys=0.09%, ctx=209, majf=0, minf=1 00:22:52.238 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:22:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.239 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:22:52.239 issued rwts: total=0,649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.239 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.239 job1: (groupid=0, jobs=1): err= 0: pid=589263: Sat Jul 20 17:15:08 2024 00:22:52.239 write: IOPS=365, BW=91.3MiB/s (95.8MB/s)(932MiB/10200msec); 0 zone resets 00:22:52.239 slat (usec): min=16, max=238016, avg=2101.26, stdev=9201.09 00:22:52.239 clat (msec): min=7, max=821, avg=172.96, stdev=136.34 00:22:52.239 lat (msec): min=7, max=909, avg=175.06, stdev=137.86 00:22:52.239 clat percentiles (msec): 00:22:52.239 | 1.00th=[ 19], 5.00th=[ 43], 10.00th=[ 64], 20.00th=[ 101], 00:22:52.239 | 30.00th=[ 109], 40.00th=[ 116], 50.00th=[ 127], 60.00th=[ 146], 00:22:52.239 | 70.00th=[ 159], 80.00th=[ 194], 90.00th=[ 401], 95.00th=[ 502], 00:22:52.239 | 99.00th=[ 642], 99.50th=[ 701], 99.90th=[ 743], 99.95th=[ 776], 00:22:52.239 | 99.99th=[ 818] 00:22:52.239 bw ( KiB/s): min=17408, max=167600, per=16.13%, avg=93863.90, stdev=42992.04, samples=20 00:22:52.239 iops : min= 68, max= 654, avg=366.35, stdev=167.74, samples=20 00:22:52.239 lat (msec) : 10=0.13%, 20=1.07%, 50=4.99%, 100=13.84%, 250=64.37% 00:22:52.239 lat (msec) : 500=10.28%, 750=5.26%, 1000=0.05% 00:22:52.239 cpu : usr=0.81%, sys=0.92%, ctx=1697, majf=0, minf=1 00:22:52.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.239 issued rwts: total=0,3727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.239 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.239 job2: (groupid=0, jobs=1): err= 0: pid=589264: Sat Jul 20 17:15:08 2024 00:22:52.239 write: IOPS=138, BW=34.7MiB/s (36.4MB/s)(364MiB/10507msec); 0 zone resets 00:22:52.239 slat (usec): min=25, max=1693.8k, avg=6677.02, stdev=51676.03 00:22:52.239 clat (msec): min=100, max=3084, avg=454.26, stdev=489.00 00:22:52.239 lat (msec): min=107, max=3084, avg=460.93, stdev=495.10 00:22:52.239 clat percentiles (msec): 00:22:52.239 | 1.00th=[ 105], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 122], 00:22:52.239 | 30.00th=[ 148], 40.00th=[ 182], 50.00th=[ 288], 60.00th=[ 426], 00:22:52.239 | 70.00th=[ 506], 80.00th=[ 609], 90.00th=[ 785], 95.00th=[ 1804], 00:22:52.239 | 99.00th=[ 2089], 99.50th=[ 2198], 99.90th=[ 3071], 99.95th=[ 3071], 00:22:52.239 | 99.99th=[ 3071] 00:22:52.239 bw ( KiB/s): min= 2052, max=141595, per=7.67%, avg=44614.38, stdev=35935.20, samples=16 00:22:52.239 iops : min= 8, max= 553, avg=174.19, stdev=140.32, samples=16 00:22:52.239 lat (msec) : 250=46.40%, 500=22.51%, 750=18.12%, 1000=4.19%, 2000=7.21% 00:22:52.239 lat (msec) : >=2000=1.58% 00:22:52.239 cpu : usr=0.45%, sys=0.22%, ctx=488, majf=0, minf=1 00:22:52.239 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:22:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.239 complete : 0=0.0%, 4=99.9%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.239 issued rwts: total=0,1457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.239 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.239 job3: (groupid=0, jobs=1): err= 0: pid=589265: Sat Jul 20 17:15:08 2024 00:22:52.239 write: IOPS=174, BW=43.7MiB/s (45.8MB/s)(444MiB/10158msec); 0 zone resets 00:22:52.239 slat (usec): min=16, max=1443.2k, avg=2744.84, stdev=40546.01 00:22:52.239 clat (msec): min=7, max=2858, avg=363.56, stdev=493.20 00:22:52.239 lat (msec): min=7, max=2866, avg=366.31, stdev=497.61 00:22:52.239 clat percentiles (msec): 00:22:52.239 | 1.00th=[ 11], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 51], 00:22:52.239 | 30.00th=[ 65], 40.00th=[ 101], 50.00th=[ 148], 60.00th=[ 259], 00:22:52.239 | 70.00th=[ 405], 80.00th=[ 575], 90.00th=[ 936], 95.00th=[ 1368], 00:22:52.239 | 99.00th=[ 2802], 99.50th=[ 2836], 99.90th=[ 2836], 99.95th=[ 2869], 00:22:52.239 | 99.99th=[ 2869] 00:22:52.239 bw ( KiB/s): min= 3584, max=107008, per=8.36%, avg=48668.44, stdev=34544.98, samples=18 00:22:52.239 iops : min= 14, max= 418, avg=190.11, stdev=134.94, samples=18 00:22:52.239 lat (msec) : 10=0.45%, 20=6.03%, 50=13.75%, 100=19.79%, 250=19.84% 00:22:52.239 lat (msec) : 500=14.94%, 750=13.30%, 1000=2.09%, 2000=8.68%, >=2000=1.13% 00:22:52.239 cpu : usr=0.42%, sys=0.51%, ctx=1461, majf=0, minf=1 00:22:52.239 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:22:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.239 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.239 issued rwts: total=0,1774,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.239 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.239 job4: (groupid=0, jobs=1): err= 0: pid=589266: Sat Jul 20 17:15:08 2024 00:22:52.239 write: IOPS=21, BW=5384KiB/s (5513kB/s)(53.5MiB/10176msec); 0 zone resets 00:22:52.239 slat (usec): min=24, max=2787.8k, avg=30526.98, stdev=234595.42 00:22:52.239 clat (msec): min=140, max=9830, avg=3009.30, stdev=3683.26 00:22:52.239 lat (msec): min=140, max=9836, avg=3039.83, stdev=3692.88 00:22:52.239 clat percentiles (msec): 00:22:52.239 | 1.00th=[ 146], 5.00th=[ 186], 10.00th=[ 222], 20.00th=[ 271], 00:22:52.239 | 30.00th=[ 305], 40.00th=[ 368], 50.00th=[ 518], 60.00th=[ 1905], 00:22:52.239 | 70.00th=[ 2433], 80.00th=[ 8020], 90.00th=[ 9597], 95.00th=[ 9731], 00:22:52.239 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:22:52.239 | 99.99th=[ 9866] 00:22:52.239 bw ( KiB/s): min= 512, max=27136, per=1.11%, avg=6445.00, stdev=7146.95, samples=12 00:22:52.239 iops : min= 2, max= 106, avg=25.17, stdev=27.91, samples=12 00:22:52.239 lat (msec) : 250=16.36%, 500=30.84%, 750=7.01%, 2000=11.68%, >=2000=34.11% 00:22:52.239 cpu : usr=0.06%, sys=0.04%, ctx=150, majf=0, minf=1 00:22:52.239 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.5%, 32=15.0%, >=64=70.6% 00:22:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.239 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.7%, >=64=0.0% 00:22:52.239 issued rwts: total=0,214,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.239 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.239 job5: (groupid=0, jobs=1): err= 0: pid=589271: Sat Jul 20 17:15:08 2024 00:22:52.239 write: IOPS=409, BW=102MiB/s (107MB/s)(1046MiB/10217msec); 0 zone resets 00:22:52.239 slat (usec): min=15, max=440419, avg=1888.32, stdev=10213.70 00:22:52.239 clat (msec): min=3, 
max=1049, avg=154.31, stdev=114.46 00:22:52.239 lat (msec): min=3, max=1049, avg=156.20, stdev=116.06 00:22:52.239 clat percentiles (msec): 00:22:52.239 | 1.00th=[ 19], 5.00th=[ 47], 10.00th=[ 59], 20.00th=[ 95], 00:22:52.239 | 30.00th=[ 114], 40.00th=[ 124], 50.00th=[ 134], 60.00th=[ 144], 00:22:52.239 | 70.00th=[ 157], 80.00th=[ 171], 90.00th=[ 205], 95.00th=[ 422], 00:22:52.239 | 99.00th=[ 667], 99.50th=[ 701], 99.90th=[ 743], 99.95th=[ 743], 00:22:52.239 | 99.99th=[ 1053] 00:22:52.239 bw ( KiB/s): min= 6656, max=194560, per=18.14%, avg=105567.60, stdev=45837.93, samples=20 00:22:52.239 iops : min= 26, max= 760, avg=412.10, stdev=179.02, samples=20 00:22:52.239 lat (msec) : 4=0.02%, 10=0.33%, 20=0.84%, 50=5.74%, 100=14.20% 00:22:52.239 lat (msec) : 250=70.53%, 500=5.07%, 750=3.25%, 2000=0.02% 00:22:52.239 cpu : usr=1.04%, sys=0.87%, ctx=2072, majf=0, minf=1 00:22:52.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.239 issued rwts: total=0,4184,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.239 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.239 job6: (groupid=0, jobs=1): err= 0: pid=589272: Sat Jul 20 17:15:08 2024 00:22:52.239 write: IOPS=388, BW=97.0MiB/s (102MB/s)(1017MiB/10476msec); 0 zone resets 00:22:52.239 slat (usec): min=25, max=252469, avg=2426.28, stdev=6341.76 00:22:52.239 clat (msec): min=22, max=838, avg=162.00, stdev=106.24 00:22:52.239 lat (msec): min=22, max=838, avg=164.43, stdev=106.92 00:22:52.239 clat percentiles (msec): 00:22:52.239 | 1.00th=[ 93], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 107], 00:22:52.239 | 30.00th=[ 111], 40.00th=[ 116], 50.00th=[ 124], 60.00th=[ 133], 00:22:52.239 | 70.00th=[ 144], 80.00th=[ 192], 90.00th=[ 292], 95.00th=[ 368], 00:22:52.239 | 99.00th=[ 751], 99.50th=[ 802], 99.90th=[ 835], 99.95th=[ 835], 00:22:52.239 | 99.99th=[ 835] 00:22:52.239 bw ( KiB/s): min=43008, max=155648, per=17.60%, avg=102451.20, stdev=38845.96, samples=20 00:22:52.239 iops : min= 168, max= 608, avg=400.20, stdev=151.74, samples=20 00:22:52.239 lat (msec) : 50=0.10%, 100=6.00%, 250=81.16%, 500=11.21%, 750=0.54% 00:22:52.239 lat (msec) : 1000=0.98% 00:22:52.239 cpu : usr=1.03%, sys=0.98%, ctx=1105, majf=0, minf=1 00:22:52.239 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.239 issued rwts: total=0,4066,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.239 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.239 job7: (groupid=0, jobs=1): err= 0: pid=589273: Sat Jul 20 17:15:08 2024 00:22:52.239 write: IOPS=155, BW=38.8MiB/s (40.7MB/s)(411MiB/10585msec); 0 zone resets 00:22:52.239 slat (usec): min=21, max=1415.0k, avg=3453.60, stdev=39895.06 00:22:52.239 clat (msec): min=8, max=2967, avg=408.45, stdev=541.42 00:22:52.239 lat (msec): min=10, max=3067, avg=411.91, stdev=547.04 00:22:52.239 clat percentiles (msec): 00:22:52.239 | 1.00th=[ 25], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 80], 00:22:52.239 | 30.00th=[ 109], 40.00th=[ 169], 50.00th=[ 213], 60.00th=[ 243], 00:22:52.239 | 70.00th=[ 321], 80.00th=[ 592], 90.00th=[ 1217], 95.00th=[ 1754], 00:22:52.239 | 99.00th=[ 2433], 99.50th=[ 2769], 99.90th=[ 2869], 99.95th=[ 2970], 00:22:52.239 
| 99.99th=[ 2970] 00:22:52.239 bw ( KiB/s): min=10240, max=78848, per=8.17%, avg=47579.65, stdev=21826.10, samples=17 00:22:52.239 iops : min= 40, max= 308, avg=185.76, stdev=85.28, samples=17 00:22:52.239 lat (msec) : 10=0.06%, 20=0.67%, 50=11.81%, 100=15.03%, 250=34.39% 00:22:52.239 lat (msec) : 500=16.49%, 750=5.17%, 1000=5.23%, 2000=8.76%, >=2000=2.37% 00:22:52.239 cpu : usr=0.28%, sys=0.45%, ctx=1166, majf=0, minf=1 00:22:52.239 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:22:52.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.239 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.239 issued rwts: total=0,1643,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.239 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.239 job8: (groupid=0, jobs=1): err= 0: pid=589274: Sat Jul 20 17:15:08 2024 00:22:52.240 write: IOPS=100, BW=25.1MiB/s (26.3MB/s)(253MiB/10083msec); 0 zone resets 00:22:52.240 slat (usec): min=16, max=7576.8k, avg=7742.57, stdev=238290.28 00:22:52.240 clat (msec): min=3, max=7732, avg=630.33, stdev=1870.41 00:22:52.240 lat (msec): min=3, max=7733, avg=638.08, stdev=1883.55 00:22:52.240 clat percentiles (msec): 00:22:52.240 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:22:52.240 | 30.00th=[ 19], 40.00th=[ 25], 50.00th=[ 35], 60.00th=[ 62], 00:22:52.240 | 70.00th=[ 114], 80.00th=[ 146], 90.00th=[ 1787], 95.00th=[ 7684], 00:22:52.240 | 99.00th=[ 7752], 99.50th=[ 7752], 99.90th=[ 7752], 99.95th=[ 7752], 00:22:52.240 | 99.99th=[ 7752] 00:22:52.240 bw ( KiB/s): min=42581, max=130308, per=13.91%, avg=80947.33, stdev=34300.73, samples=6 00:22:52.240 iops : min= 166, max= 509, avg=315.83, stdev=133.95, samples=6 00:22:52.240 lat (msec) : 4=0.10%, 10=0.49%, 20=32.44%, 50=25.02%, 100=9.79% 00:22:52.240 lat (msec) : 250=20.28%, 2000=5.64%, >=2000=6.23% 00:22:52.240 cpu : usr=0.22%, sys=0.36%, ctx=944, majf=0, minf=1 00:22:52.240 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:22:52.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.240 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.240 issued rwts: total=0,1011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.240 job9: (groupid=0, jobs=1): err= 0: pid=589275: Sat Jul 20 17:15:08 2024 00:22:52.240 write: IOPS=258, BW=64.6MiB/s (67.7MB/s)(676MiB/10464msec); 0 zone resets 00:22:52.240 slat (usec): min=22, max=1152.8k, avg=2790.49, stdev=23368.24 00:22:52.240 clat (msec): min=4, max=1605, avg=244.68, stdev=306.26 00:22:52.240 lat (msec): min=4, max=1605, avg=247.47, stdev=308.11 00:22:52.240 clat percentiles (msec): 00:22:52.240 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 49], 20.00th=[ 55], 00:22:52.240 | 30.00th=[ 91], 40.00th=[ 101], 50.00th=[ 107], 60.00th=[ 213], 00:22:52.240 | 70.00th=[ 275], 80.00th=[ 351], 90.00th=[ 502], 95.00th=[ 835], 00:22:52.240 | 99.00th=[ 1586], 99.50th=[ 1603], 99.90th=[ 1603], 99.95th=[ 1603], 00:22:52.240 | 99.99th=[ 1603] 00:22:52.240 bw ( KiB/s): min= 2052, max=168111, per=12.22%, avg=71142.53, stdev=50516.63, samples=19 00:22:52.240 iops : min= 8, max= 656, avg=277.74, stdev=197.16, samples=19 00:22:52.240 lat (msec) : 10=3.63%, 20=0.89%, 50=7.59%, 100=28.02%, 250=25.91% 00:22:52.240 lat (msec) : 500=23.87%, 750=3.26%, 1000=3.18%, 2000=3.66% 00:22:52.240 cpu : usr=0.85%, sys=0.61%, ctx=1393, majf=0, minf=1 00:22:52.240 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:22:52.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.240 issued rwts: total=0,2702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.240 job10: (groupid=0, jobs=1): err= 0: pid=589276: Sat Jul 20 17:15:08 2024 00:22:52.240 write: IOPS=257, BW=64.3MiB/s (67.5MB/s)(660MiB/10251msec); 0 zone resets 00:22:52.240 slat (usec): min=19, max=1663.5k, avg=3415.07, stdev=45414.14 00:22:52.240 clat (msec): min=3, max=1850, avg=244.99, stdev=346.97 00:22:52.240 lat (msec): min=5, max=1901, avg=248.40, stdev=350.26 00:22:52.240 clat percentiles (msec): 00:22:52.240 | 1.00th=[ 22], 5.00th=[ 60], 10.00th=[ 86], 20.00th=[ 107], 00:22:52.240 | 30.00th=[ 112], 40.00th=[ 124], 50.00th=[ 140], 60.00th=[ 157], 00:22:52.240 | 70.00th=[ 182], 80.00th=[ 218], 90.00th=[ 435], 95.00th=[ 1053], 00:22:52.240 | 99.00th=[ 1770], 99.50th=[ 1804], 99.90th=[ 1838], 99.95th=[ 1838], 00:22:52.240 | 99.99th=[ 1854] 00:22:52.240 bw ( KiB/s): min= 1026, max=162304, per=14.15%, avg=82362.69, stdev=46955.34, samples=16 00:22:52.240 iops : min= 4, max= 634, avg=321.62, stdev=183.44, samples=16 00:22:52.240 lat (msec) : 4=0.04%, 10=0.11%, 20=0.68%, 50=2.65%, 100=10.73% 00:22:52.240 lat (msec) : 250=69.45%, 500=9.17%, 1000=1.25%, 2000=5.91% 00:22:52.240 cpu : usr=0.62%, sys=0.49%, ctx=1070, majf=0, minf=1 00:22:52.240 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:22:52.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:52.240 issued rwts: total=0,2638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.240 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:52.240 00:22:52.240 Run status group 0 (all jobs): 00:22:52.240 WRITE: bw=568MiB/s (596MB/s), 5384KiB/s-102MiB/s (5513kB/s-107MB/s), io=6016MiB (6308MB), run=10083-10585msec 00:22:52.240 00:22:52.240 Disk stats (read/write): 00:22:52.240 nvme0n1: ios=49/1242, merge=0/0, ticks=455/1234938, in_queue=1235393, util=98.16% 00:22:52.240 nvme10n1: ios=49/7327, merge=0/0, ticks=231/1221769, in_queue=1222000, util=97.53% 00:22:52.240 nvme1n1: ios=40/2909, merge=0/0, ticks=1026/1256963, in_queue=1257989, util=100.00% 00:22:52.240 nvme2n1: ios=0/3419, merge=0/0, ticks=0/1250435, in_queue=1250435, util=96.61% 00:22:52.240 nvme3n1: ios=39/301, merge=0/0, ticks=863/1162045, in_queue=1162908, util=100.00% 00:22:52.240 nvme4n1: ios=0/8248, merge=0/0, ticks=0/1208873, in_queue=1208873, util=97.46% 00:22:52.240 nvme5n1: ios=37/8075, merge=0/0, ticks=768/1216825, in_queue=1217593, util=99.98% 00:22:52.240 nvme6n1: ios=0/3207, merge=0/0, ticks=0/1238136, in_queue=1238136, util=98.11% 00:22:52.240 nvme7n1: ios=0/1900, merge=0/0, ticks=0/1254710, in_queue=1254710, util=98.76% 00:22:52.240 nvme8n1: ios=36/5352, merge=0/0, ticks=1112/1243576, in_queue=1244688, util=100.00% 00:22:52.240 nvme9n1: ios=38/5148, merge=0/0, ticks=6594/910750, in_queue=917344, util=100.00% 00:22:52.240 17:15:08 -- target/multiconnection.sh@36 -- # sync 00:22:52.240 17:15:08 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:52.240 17:15:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.240 17:15:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 
00:22:52.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:52.807 17:15:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:52.807 17:15:08 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.807 17:15:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.807 17:15:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:52.807 17:15:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.807 17:15:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:52.807 17:15:08 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.807 17:15:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.807 17:15:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.807 17:15:08 -- common/autotest_common.sh@10 -- # set +x 00:22:52.807 17:15:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.807 17:15:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.807 17:15:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:52.807 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:52.807 17:15:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:52.807 17:15:08 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.807 17:15:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.807 17:15:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:52.807 17:15:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.807 17:15:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:52.807 17:15:08 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.807 17:15:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:52.807 17:15:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.807 17:15:08 -- common/autotest_common.sh@10 -- # set +x 00:22:52.807 17:15:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.807 17:15:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.807 17:15:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:53.064 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:53.064 17:15:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:53.064 17:15:09 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.064 17:15:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.064 17:15:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:53.064 17:15:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.064 17:15:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:53.064 17:15:09 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.064 17:15:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:53.064 17:15:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.064 17:15:09 -- common/autotest_common.sh@10 -- # set +x 00:22:53.064 17:15:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.064 17:15:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.064 17:15:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:53.628 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:53.628 17:15:09 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:53.628 17:15:09 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.628 17:15:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.628 17:15:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:53.628 17:15:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.628 17:15:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:53.628 17:15:09 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.628 17:15:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:53.628 17:15:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.628 17:15:09 -- common/autotest_common.sh@10 -- # set +x 00:22:53.628 17:15:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.628 17:15:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.628 17:15:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:53.628 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:53.628 17:15:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:53.628 17:15:09 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.628 17:15:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.628 17:15:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:53.628 17:15:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.628 17:15:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:53.628 17:15:09 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.628 17:15:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:53.628 17:15:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.628 17:15:09 -- common/autotest_common.sh@10 -- # set +x 00:22:53.628 17:15:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.628 17:15:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.628 17:15:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:53.885 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:53.885 17:15:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:53.885 17:15:10 -- common/autotest_common.sh@1198 -- # local i=0 00:22:53.885 17:15:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:53.885 17:15:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:53.885 17:15:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:53.885 17:15:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:53.885 17:15:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:53.885 17:15:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:53.885 17:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:53.885 17:15:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.885 17:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:53.885 17:15:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:53.885 17:15:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:54.142 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:54.142 17:15:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:54.142 17:15:10 -- 
common/autotest_common.sh@1198 -- # local i=0 00:22:54.142 17:15:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.142 17:15:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:54.142 17:15:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.142 17:15:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:54.142 17:15:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.142 17:15:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:54.142 17:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.142 17:15:10 -- common/autotest_common.sh@10 -- # set +x 00:22:54.142 17:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.142 17:15:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.142 17:15:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:54.142 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:54.142 17:15:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:54.142 17:15:10 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.142 17:15:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.142 17:15:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:54.142 17:15:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.142 17:15:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:54.399 17:15:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.399 17:15:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:54.399 17:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.399 17:15:10 -- common/autotest_common.sh@10 -- # set +x 00:22:54.399 17:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.399 17:15:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.399 17:15:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:54.399 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:54.399 17:15:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:54.399 17:15:10 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.399 17:15:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.399 17:15:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:54.399 17:15:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.399 17:15:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:54.399 17:15:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.399 17:15:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:54.399 17:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.399 17:15:10 -- common/autotest_common.sh@10 -- # set +x 00:22:54.399 17:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.399 17:15:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.399 17:15:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:54.399 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:54.399 17:15:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:54.399 17:15:10 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.399 17:15:10 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.399 17:15:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:54.656 17:15:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.656 17:15:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:54.656 17:15:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.656 17:15:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:54.656 17:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.656 17:15:10 -- common/autotest_common.sh@10 -- # set +x 00:22:54.656 17:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.656 17:15:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.656 17:15:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:54.656 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:54.656 17:15:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:54.656 17:15:10 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.656 17:15:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.656 17:15:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:54.656 17:15:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.656 17:15:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:54.656 17:15:10 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.656 17:15:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:54.656 17:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.656 17:15:10 -- common/autotest_common.sh@10 -- # set +x 00:22:54.656 17:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.656 17:15:10 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:54.656 17:15:10 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:54.656 17:15:10 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:54.656 17:15:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:54.656 17:15:10 -- nvmf/common.sh@116 -- # sync 00:22:54.656 17:15:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:54.656 17:15:10 -- nvmf/common.sh@119 -- # set +e 00:22:54.656 17:15:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:54.656 17:15:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:54.656 rmmod nvme_tcp 00:22:54.656 rmmod nvme_fabrics 00:22:54.656 rmmod nvme_keyring 00:22:54.656 17:15:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:54.656 17:15:10 -- nvmf/common.sh@123 -- # set -e 00:22:54.656 17:15:10 -- nvmf/common.sh@124 -- # return 0 00:22:54.656 17:15:10 -- nvmf/common.sh@477 -- # '[' -n 583839 ']' 00:22:54.656 17:15:10 -- nvmf/common.sh@478 -- # killprocess 583839 00:22:54.656 17:15:10 -- common/autotest_common.sh@926 -- # '[' -z 583839 ']' 00:22:54.656 17:15:10 -- common/autotest_common.sh@930 -- # kill -0 583839 00:22:54.656 17:15:10 -- common/autotest_common.sh@931 -- # uname 00:22:54.656 17:15:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:54.656 17:15:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 583839 00:22:54.656 17:15:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:54.656 17:15:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:54.656 17:15:10 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 583839' 00:22:54.656 killing process with pid 583839 00:22:54.656 17:15:10 -- common/autotest_common.sh@945 -- # kill 583839 00:22:54.656 17:15:10 -- common/autotest_common.sh@950 -- # wait 583839 00:22:55.221 17:15:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:55.221 17:15:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:55.221 17:15:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:55.221 17:15:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.221 17:15:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:55.221 17:15:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.221 17:15:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.221 17:15:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.750 17:15:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:57.750 00:22:57.750 real 1m0.954s 00:22:57.750 user 3m19.245s 00:22:57.750 sys 0m17.728s 00:22:57.750 17:15:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:57.750 17:15:13 -- common/autotest_common.sh@10 -- # set +x 00:22:57.750 ************************************ 00:22:57.750 END TEST nvmf_multiconnection 00:22:57.750 ************************************ 00:22:57.750 17:15:13 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:57.750 17:15:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:57.750 17:15:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:57.750 17:15:13 -- common/autotest_common.sh@10 -- # set +x 00:22:57.750 ************************************ 00:22:57.750 START TEST nvmf_initiator_timeout 00:22:57.750 ************************************ 00:22:57.750 17:15:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:57.750 * Looking for test storage... 
00:22:57.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:57.750 17:15:13 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.750 17:15:13 -- nvmf/common.sh@7 -- # uname -s 00:22:57.750 17:15:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.750 17:15:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.750 17:15:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.750 17:15:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.750 17:15:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.750 17:15:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.750 17:15:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.750 17:15:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.750 17:15:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.750 17:15:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.750 17:15:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.750 17:15:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:57.750 17:15:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.750 17:15:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.750 17:15:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.750 17:15:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.750 17:15:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.750 17:15:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.750 17:15:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.750 17:15:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.750 17:15:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.750 17:15:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.750 17:15:13 -- paths/export.sh@5 -- # export PATH 00:22:57.750 17:15:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.750 17:15:13 -- nvmf/common.sh@46 -- # : 0 00:22:57.750 17:15:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:57.750 17:15:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:57.750 17:15:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:57.750 17:15:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.750 17:15:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.750 17:15:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:57.750 17:15:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:57.750 17:15:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:57.750 17:15:13 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:57.750 17:15:13 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:57.750 17:15:13 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:57.750 17:15:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:57.750 17:15:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.750 17:15:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:57.750 17:15:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:57.750 17:15:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:57.750 17:15:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.750 17:15:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.750 17:15:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.750 17:15:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:57.750 17:15:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:57.750 17:15:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:57.750 17:15:13 -- common/autotest_common.sh@10 -- # set +x 00:22:59.125 17:15:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:59.125 17:15:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:59.125 17:15:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:59.125 17:15:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:59.125 17:15:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:59.125 17:15:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:59.125 17:15:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:59.125 17:15:15 -- nvmf/common.sh@294 -- # net_devs=() 00:22:59.125 17:15:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:59.125 
17:15:15 -- nvmf/common.sh@295 -- # e810=() 00:22:59.125 17:15:15 -- nvmf/common.sh@295 -- # local -ga e810 00:22:59.125 17:15:15 -- nvmf/common.sh@296 -- # x722=() 00:22:59.125 17:15:15 -- nvmf/common.sh@296 -- # local -ga x722 00:22:59.125 17:15:15 -- nvmf/common.sh@297 -- # mlx=() 00:22:59.125 17:15:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:59.125 17:15:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.125 17:15:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:59.125 17:15:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:59.125 17:15:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:59.125 17:15:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:59.125 17:15:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:59.125 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:59.125 17:15:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:59.125 17:15:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:59.125 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:59.125 17:15:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:59.125 17:15:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:59.125 17:15:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.125 17:15:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:59.125 17:15:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.125 17:15:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:22:59.125 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:59.125 17:15:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.125 17:15:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:59.125 17:15:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.125 17:15:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:59.125 17:15:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.125 17:15:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:59.125 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:59.125 17:15:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.125 17:15:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:59.125 17:15:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:59.125 17:15:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:59.125 17:15:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:59.125 17:15:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.125 17:15:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.125 17:15:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.125 17:15:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:59.125 17:15:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.125 17:15:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.125 17:15:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:59.125 17:15:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.125 17:15:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.125 17:15:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:59.125 17:15:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:59.125 17:15:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.125 17:15:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.125 17:15:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.125 17:15:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.125 17:15:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:59.125 17:15:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.384 17:15:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.384 17:15:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.384 17:15:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:59.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:59.384 00:22:59.384 --- 10.0.0.2 ping statistics --- 00:22:59.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.384 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:59.384 17:15:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:22:59.384 00:22:59.384 --- 10.0.0.1 ping statistics --- 00:22:59.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.384 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:22:59.384 17:15:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.384 17:15:15 -- nvmf/common.sh@410 -- # return 0 00:22:59.384 17:15:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:59.384 17:15:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.384 17:15:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:59.384 17:15:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:59.384 17:15:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.384 17:15:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:59.384 17:15:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:59.384 17:15:15 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:59.384 17:15:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:59.384 17:15:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:59.384 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:22:59.384 17:15:15 -- nvmf/common.sh@469 -- # nvmfpid=592815 00:22:59.384 17:15:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:59.384 17:15:15 -- nvmf/common.sh@470 -- # waitforlisten 592815 00:22:59.384 17:15:15 -- common/autotest_common.sh@819 -- # '[' -z 592815 ']' 00:22:59.384 17:15:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.384 17:15:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:59.384 17:15:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.384 17:15:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:59.384 17:15:15 -- common/autotest_common.sh@10 -- # set +x 00:22:59.384 [2024-07-20 17:15:15.406020] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:59.384 [2024-07-20 17:15:15.406107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.384 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.384 [2024-07-20 17:15:15.479236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.643 [2024-07-20 17:15:15.570139] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:59.643 [2024-07-20 17:15:15.570309] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.643 [2024-07-20 17:15:15.570330] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.643 [2024-07-20 17:15:15.570346] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
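For reference, the network-namespace wiring exercised in the trace above boils down to the following sequence (a sketch reconstructed from the commands logged here; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are specific to this test rig):

    # target side lives in the cvl_0_0_ns_spdk namespace, initiator stays on the host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    # sanity checks, matching the pings above
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1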
00:22:59.643 [2024-07-20 17:15:15.570424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.643 [2024-07-20 17:15:15.570494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.643 [2024-07-20 17:15:15.570584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.643 [2024-07-20 17:15:15.570587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.208 17:15:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:00.208 17:15:16 -- common/autotest_common.sh@852 -- # return 0 00:23:00.208 17:15:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:00.208 17:15:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:00.208 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.208 17:15:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.209 17:15:16 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:00.209 17:15:16 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:00.209 17:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.209 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.468 Malloc0 00:23:00.468 17:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.468 17:15:16 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:00.468 17:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.468 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.468 Delay0 00:23:00.468 17:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.468 17:15:16 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.468 17:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.468 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.468 [2024-07-20 17:15:16.396243] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.468 17:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.468 17:15:16 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:00.468 17:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.468 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.468 17:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.468 17:15:16 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:00.468 17:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.468 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.468 17:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.468 17:15:16 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:00.468 17:15:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.468 17:15:16 -- common/autotest_common.sh@10 -- # set +x 00:23:00.468 [2024-07-20 17:15:16.424505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.468 17:15:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.468 17:15:16 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:01.035 17:15:16 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:01.035 17:15:16 -- common/autotest_common.sh@1177 -- # local i=0 00:23:01.035 17:15:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:01.035 17:15:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:01.035 17:15:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:02.937 17:15:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:02.937 17:15:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:02.937 17:15:19 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:02.937 17:15:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:02.937 17:15:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:02.937 17:15:19 -- common/autotest_common.sh@1187 -- # return 0 00:23:02.937 17:15:19 -- target/initiator_timeout.sh@35 -- # fio_pid=593258 00:23:02.937 17:15:19 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:02.937 17:15:19 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:02.937 [global] 00:23:02.937 thread=1 00:23:02.937 invalidate=1 00:23:02.937 rw=write 00:23:02.937 time_based=1 00:23:02.937 runtime=60 00:23:02.937 ioengine=libaio 00:23:02.937 direct=1 00:23:02.937 bs=4096 00:23:02.937 iodepth=1 00:23:02.937 norandommap=0 00:23:02.937 numjobs=1 00:23:02.937 00:23:02.937 verify_dump=1 00:23:02.937 verify_backlog=512 00:23:02.937 verify_state_save=0 00:23:02.937 do_verify=1 00:23:02.937 verify=crc32c-intel 00:23:02.937 [job0] 00:23:02.937 filename=/dev/nvme0n1 00:23:02.937 Could not set queue depth (nvme0n1) 00:23:03.195 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:03.195 fio-3.35 00:23:03.195 Starting 1 thread 00:23:06.467 17:15:22 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:06.467 17:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.467 17:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:06.467 true 00:23:06.467 17:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.467 17:15:22 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:06.467 17:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.467 17:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:06.467 true 00:23:06.467 17:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.467 17:15:22 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:06.467 17:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.467 17:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:06.467 true 00:23:06.467 17:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.467 17:15:22 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:06.467 17:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.467 17:15:22 -- common/autotest_common.sh@10 -- # set +x 00:23:06.467 true 00:23:06.467 17:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
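[Annotation] What makes this an initiator-timeout test is the Delay0 bdev layered on Malloc0: while fio is writing, the four latency knobs just raised above push every I/O out to tens of seconds, well past the initiator's command timeout, and the sleep-then-reset that follows lets the queued I/O drain. A sketch of the same sequence; rpc_cmd in the log is autotest's wrapper, and scripts/rpc.py is assumed here as the standalone equivalent (latency values are in microseconds):

    # create a 64 MiB ramdisk (512 B blocks) and wrap it in a delay bdev with 30 us baseline latencies
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    # while I/O is in flight, raise avg/p99 read+write latency far past the timeout (values as logged)
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # then drop everything back to 30 us so the outstanding I/O can complete
    for t in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$t" 30
    done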
00:23:06.467 17:15:22 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:08.998 17:15:25 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:08.998 17:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:08.998 17:15:25 -- common/autotest_common.sh@10 -- # set +x 00:23:08.998 true 00:23:08.998 17:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:08.998 17:15:25 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:08.998 17:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:08.998 17:15:25 -- common/autotest_common.sh@10 -- # set +x 00:23:08.998 true 00:23:08.998 17:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:08.998 17:15:25 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:08.998 17:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:08.998 17:15:25 -- common/autotest_common.sh@10 -- # set +x 00:23:08.998 true 00:23:08.998 17:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:08.998 17:15:25 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:08.998 17:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:08.998 17:15:25 -- common/autotest_common.sh@10 -- # set +x 00:23:08.998 true 00:23:08.998 17:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:08.998 17:15:25 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:08.998 17:15:25 -- target/initiator_timeout.sh@54 -- # wait 593258 00:24:05.196 00:24:05.196 job0: (groupid=0, jobs=1): err= 0: pid=593456: Sat Jul 20 17:16:19 2024 00:24:05.196 read: IOPS=32, BW=131KiB/s (134kB/s)(7844KiB/60019msec) 00:24:05.196 slat (usec): min=6, max=13124, avg=35.91, stdev=387.67 00:24:05.196 clat (usec): min=603, max=41335k, avg=30123.67, stdev=933364.07 00:24:05.196 lat (usec): min=616, max=41335k, avg=30159.58, stdev=933364.00 00:24:05.196 clat percentiles (usec): 00:24:05.196 | 1.00th=[ 619], 5.00th=[ 627], 10.00th=[ 635], 00:24:05.196 | 20.00th=[ 676], 30.00th=[ 734], 40.00th=[ 758], 00:24:05.196 | 50.00th=[ 791], 60.00th=[ 824], 70.00th=[ 865], 00:24:05.196 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:24:05.196 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 43779], 00:24:05.196 | 99.95th=[17112761], 99.99th=[17112761] 00:24:05.196 write: IOPS=34, BW=136KiB/s (140kB/s)(8192KiB/60019msec); 0 zone resets 00:24:05.196 slat (usec): min=7, max=26753, avg=34.39, stdev=590.84 00:24:05.196 clat (usec): min=290, max=2026, avg=378.97, stdev=64.91 00:24:05.196 lat (usec): min=298, max=27226, avg=413.36, stdev=597.15 00:24:05.196 clat percentiles (usec): 00:24:05.196 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 326], 00:24:05.196 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 371], 60.00th=[ 392], 00:24:05.196 | 70.00th=[ 404], 80.00th=[ 433], 90.00th=[ 457], 95.00th=[ 469], 00:24:05.196 | 99.00th=[ 498], 99.50th=[ 515], 99.90th=[ 553], 99.95th=[ 553], 00:24:05.196 | 99.99th=[ 2024] 00:24:05.196 bw ( KiB/s): min= 1872, max= 4096, per=100.00%, avg=3276.80, stdev=1128.62, samples=5 00:24:05.196 iops : min= 468, max= 1024, avg=819.20, stdev=282.15, samples=5 00:24:05.196 lat (usec) : 500=50.59%, 750=17.91%, 1000=21.30% 00:24:05.196 lat (msec) : 2=0.07%, 4=0.02%, 50=10.08%, >=2000=0.02% 00:24:05.196 cpu : usr=0.15%, sys=0.16%, ctx=4014, majf=0, minf=2 00:24:05.196 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:05.196 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.196 issued rwts: total=1961,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.196 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:05.196 00:24:05.196 Run status group 0 (all jobs): 00:24:05.196 READ: bw=131KiB/s (134kB/s), 131KiB/s-131KiB/s (134kB/s-134kB/s), io=7844KiB (8032kB), run=60019-60019msec 00:24:05.196 WRITE: bw=136KiB/s (140kB/s), 136KiB/s-136KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60019-60019msec 00:24:05.196 00:24:05.196 Disk stats (read/write): 00:24:05.196 nvme0n1: ios=2010/2048, merge=0/0, ticks=18966/704, in_queue=19670, util=99.76% 00:24:05.196 17:16:19 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:05.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:05.196 17:16:19 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:05.196 17:16:19 -- common/autotest_common.sh@1198 -- # local i=0 00:24:05.196 17:16:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:05.196 17:16:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:05.196 17:16:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:05.196 17:16:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:05.196 17:16:19 -- common/autotest_common.sh@1210 -- # return 0 00:24:05.196 17:16:19 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:05.196 17:16:19 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:05.196 nvmf hotplug test: fio successful as expected 00:24:05.196 17:16:19 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.196 17:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:05.196 17:16:19 -- common/autotest_common.sh@10 -- # set +x 00:24:05.196 17:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:05.196 17:16:19 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:05.196 17:16:19 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:05.196 17:16:19 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:05.196 17:16:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:05.196 17:16:19 -- nvmf/common.sh@116 -- # sync 00:24:05.196 17:16:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:05.196 17:16:19 -- nvmf/common.sh@119 -- # set +e 00:24:05.196 17:16:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:05.196 17:16:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:05.196 rmmod nvme_tcp 00:24:05.196 rmmod nvme_fabrics 00:24:05.196 rmmod nvme_keyring 00:24:05.196 17:16:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:05.196 17:16:19 -- nvmf/common.sh@123 -- # set -e 00:24:05.196 17:16:19 -- nvmf/common.sh@124 -- # return 0 00:24:05.196 17:16:19 -- nvmf/common.sh@477 -- # '[' -n 592815 ']' 00:24:05.196 17:16:19 -- nvmf/common.sh@478 -- # killprocess 592815 00:24:05.196 17:16:19 -- common/autotest_common.sh@926 -- # '[' -z 592815 ']' 00:24:05.196 17:16:19 -- common/autotest_common.sh@930 -- # kill -0 592815 00:24:05.196 17:16:19 -- common/autotest_common.sh@931 -- # uname 00:24:05.196 17:16:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:05.196 17:16:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
592815 00:24:05.196 17:16:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:05.196 17:16:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:05.196 17:16:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 592815' 00:24:05.196 killing process with pid 592815 00:24:05.196 17:16:19 -- common/autotest_common.sh@945 -- # kill 592815 00:24:05.196 17:16:19 -- common/autotest_common.sh@950 -- # wait 592815 00:24:05.196 17:16:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:05.196 17:16:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:05.196 17:16:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:05.196 17:16:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.196 17:16:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:05.196 17:16:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.196 17:16:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.196 17:16:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.795 17:16:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:05.795 00:24:05.795 real 1m8.517s 00:24:05.795 user 4m12.812s 00:24:05.795 sys 0m6.617s 00:24:05.795 17:16:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.795 17:16:21 -- common/autotest_common.sh@10 -- # set +x 00:24:05.795 ************************************ 00:24:05.795 END TEST nvmf_initiator_timeout 00:24:05.795 ************************************ 00:24:05.795 17:16:21 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:05.795 17:16:21 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:05.795 17:16:21 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:05.795 17:16:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:05.795 17:16:21 -- common/autotest_common.sh@10 -- # set +x 00:24:07.695 17:16:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:07.695 17:16:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:07.695 17:16:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:07.695 17:16:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:07.695 17:16:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:07.695 17:16:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:07.695 17:16:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:07.695 17:16:23 -- nvmf/common.sh@294 -- # net_devs=() 00:24:07.695 17:16:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:07.695 17:16:23 -- nvmf/common.sh@295 -- # e810=() 00:24:07.695 17:16:23 -- nvmf/common.sh@295 -- # local -ga e810 00:24:07.695 17:16:23 -- nvmf/common.sh@296 -- # x722=() 00:24:07.695 17:16:23 -- nvmf/common.sh@296 -- # local -ga x722 00:24:07.695 17:16:23 -- nvmf/common.sh@297 -- # mlx=() 00:24:07.695 17:16:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:07.695 17:16:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.695 17:16:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:07.695 17:16:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:07.695 17:16:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:07.695 17:16:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:07.695 17:16:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:07.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:07.695 17:16:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:07.695 17:16:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:07.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:07.695 17:16:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:07.695 17:16:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:07.695 17:16:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:07.695 17:16:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.695 17:16:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:07.695 17:16:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.695 17:16:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:07.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:07.695 17:16:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.695 17:16:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:07.695 17:16:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.695 17:16:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:07.695 17:16:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.695 17:16:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:07.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:07.695 17:16:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.695 17:16:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:07.695 17:16:23 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.695 17:16:23 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:07.695 17:16:23 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:07.695 17:16:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:07.695 17:16:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:07.695 17:16:23 -- common/autotest_common.sh@10 -- # set +x 00:24:07.695 ************************************ 00:24:07.695 START TEST nvmf_perf_adq 00:24:07.695 ************************************ 00:24:07.695 17:16:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:07.954 * Looking for test storage... 00:24:07.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:07.954 17:16:23 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.954 17:16:23 -- nvmf/common.sh@7 -- # uname -s 00:24:07.954 17:16:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.954 17:16:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.954 17:16:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.954 17:16:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.954 17:16:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.954 17:16:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.954 17:16:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.954 17:16:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.954 17:16:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.954 17:16:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.954 17:16:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:07.954 17:16:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:07.954 17:16:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.954 17:16:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.954 17:16:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.954 17:16:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.954 17:16:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.954 17:16:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.954 17:16:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.954 17:16:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.954 17:16:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.954 17:16:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.954 17:16:23 -- paths/export.sh@5 -- # export PATH 00:24:07.954 17:16:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.954 17:16:23 -- nvmf/common.sh@46 -- # : 0 00:24:07.954 17:16:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:07.954 17:16:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:07.954 17:16:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:07.954 17:16:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.954 17:16:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.954 17:16:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:07.954 17:16:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:07.954 17:16:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:07.954 17:16:23 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:07.954 17:16:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:07.954 17:16:23 -- common/autotest_common.sh@10 -- # set +x 00:24:09.854 17:16:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:09.854 17:16:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:09.854 17:16:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:09.854 17:16:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:09.854 17:16:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:09.854 17:16:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:09.854 17:16:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:09.854 17:16:25 -- nvmf/common.sh@294 -- # net_devs=() 00:24:09.854 17:16:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:09.854 17:16:25 -- nvmf/common.sh@295 -- # e810=() 00:24:09.854 17:16:25 -- nvmf/common.sh@295 -- # local -ga e810 00:24:09.854 17:16:25 -- nvmf/common.sh@296 -- # x722=() 00:24:09.854 17:16:25 -- nvmf/common.sh@296 -- # local -ga x722 00:24:09.854 17:16:25 -- nvmf/common.sh@297 -- # mlx=() 00:24:09.854 17:16:25 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:24:09.854 17:16:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.854 17:16:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:09.854 17:16:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:09.854 17:16:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:09.854 17:16:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:09.854 17:16:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:09.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:09.854 17:16:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:09.854 17:16:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:09.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:09.854 17:16:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:09.854 17:16:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:09.854 17:16:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:09.854 17:16:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.854 17:16:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:09.854 17:16:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.854 17:16:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:09.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:09.854 17:16:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.854 17:16:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:09.854 17:16:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:09.854 17:16:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:09.854 17:16:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.854 17:16:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:09.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:09.854 17:16:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.854 17:16:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:09.854 17:16:25 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.854 17:16:25 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:09.854 17:16:25 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:09.854 17:16:25 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:09.854 17:16:25 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:10.419 17:16:26 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:11.797 17:16:27 -- target/perf_adq.sh@54 -- # sleep 5 00:24:17.064 17:16:32 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:17.064 17:16:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:17.064 17:16:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.064 17:16:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:17.064 17:16:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:17.064 17:16:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:17.065 17:16:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.065 17:16:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.065 17:16:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.065 17:16:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:17.065 17:16:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:17.065 17:16:32 -- common/autotest_common.sh@10 -- # set +x 00:24:17.065 17:16:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:17.065 17:16:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:17.065 17:16:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:17.065 17:16:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:17.065 17:16:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:17.065 17:16:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:17.065 17:16:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:17.065 17:16:32 -- nvmf/common.sh@294 -- # net_devs=() 00:24:17.065 17:16:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:17.065 17:16:32 -- nvmf/common.sh@295 -- # e810=() 00:24:17.065 17:16:32 -- nvmf/common.sh@295 -- # local -ga e810 00:24:17.065 17:16:32 -- nvmf/common.sh@296 -- # x722=() 00:24:17.065 17:16:32 -- nvmf/common.sh@296 -- # local -ga x722 00:24:17.065 17:16:32 -- nvmf/common.sh@297 -- # mlx=() 00:24:17.065 17:16:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:17.065 17:16:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.065 17:16:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:17.065 17:16:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:17.065 17:16:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:17.065 17:16:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:17.065 17:16:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:17.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:17.065 17:16:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:17.065 17:16:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:17.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:17.065 17:16:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:17.065 17:16:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:17.065 17:16:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.065 17:16:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:17.065 17:16:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.065 17:16:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:17.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:17.065 17:16:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.065 17:16:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:17.065 17:16:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.065 17:16:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:17.065 17:16:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.065 17:16:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:17.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:17.065 17:16:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.065 17:16:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:17.065 17:16:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:17.065 17:16:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:17.065 17:16:32 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:17.065 17:16:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:17.065 17:16:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.065 17:16:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.065 17:16:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.065 17:16:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:17.065 17:16:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.065 17:16:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.065 17:16:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:17.065 17:16:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.065 17:16:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.065 17:16:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:17.065 17:16:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:17.065 17:16:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.065 17:16:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.065 17:16:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.065 17:16:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.065 17:16:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:17.065 17:16:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.065 17:16:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.065 17:16:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.065 17:16:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:17.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:24:17.065 00:24:17.065 --- 10.0.0.2 ping statistics --- 00:24:17.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.065 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:24:17.065 17:16:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:24:17.065 00:24:17.065 --- 10.0.0.1 ping statistics --- 00:24:17.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.065 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:17.065 17:16:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.065 17:16:33 -- nvmf/common.sh@410 -- # return 0 00:24:17.065 17:16:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:17.065 17:16:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.065 17:16:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:17.065 17:16:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:17.065 17:16:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.065 17:16:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:17.065 17:16:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:17.065 17:16:33 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:17.065 17:16:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:17.065 17:16:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:17.065 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.065 17:16:33 -- nvmf/common.sh@469 -- # nvmfpid=605121 00:24:17.065 17:16:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:17.065 17:16:33 -- nvmf/common.sh@470 -- # waitforlisten 605121 00:24:17.065 17:16:33 -- common/autotest_common.sh@819 -- # '[' -z 605121 ']' 00:24:17.065 17:16:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.065 17:16:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:17.065 17:16:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.065 17:16:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:17.065 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.065 [2024-07-20 17:16:33.190544] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:17.065 [2024-07-20 17:16:33.190626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.323 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.323 [2024-07-20 17:16:33.262680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.323 [2024-07-20 17:16:33.358649] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:17.323 [2024-07-20 17:16:33.358839] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.323 [2024-07-20 17:16:33.358859] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.323 [2024-07-20 17:16:33.358873] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.323 [2024-07-20 17:16:33.358936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.323 [2024-07-20 17:16:33.358977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.323 [2024-07-20 17:16:33.359000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.323 [2024-07-20 17:16:33.359004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.323 17:16:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:17.323 17:16:33 -- common/autotest_common.sh@852 -- # return 0 00:24:17.323 17:16:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:17.323 17:16:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:17.323 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.323 17:16:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.323 17:16:33 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:17.323 17:16:33 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:17.323 17:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:17.323 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.323 17:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:17.323 17:16:33 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:17.323 17:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:17.323 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.583 17:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:17.583 17:16:33 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:17.583 17:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:17.583 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.583 [2024-07-20 17:16:33.555292] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.583 17:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:17.583 17:16:33 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:17.583 17:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:17.583 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.583 Malloc1 00:24:17.583 17:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:17.583 17:16:33 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.583 17:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:17.583 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.583 17:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:17.583 17:16:33 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:17.583 17:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:17.583 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.583 17:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:17.583 17:16:33 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.583 17:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:17.583 17:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:17.583 [2024-07-20 17:16:33.605984] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.583 17:16:33 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:17.583 17:16:33 -- target/perf_adq.sh@73 -- # perfpid=605269 00:24:17.583 17:16:33 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:17.583 17:16:33 -- target/perf_adq.sh@74 -- # sleep 2 00:24:17.583 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.478 17:16:35 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:19.478 17:16:35 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:19.478 17:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.478 17:16:35 -- target/perf_adq.sh@76 -- # wc -l 00:24:19.478 17:16:35 -- common/autotest_common.sh@10 -- # set +x 00:24:19.478 17:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.736 17:16:35 -- target/perf_adq.sh@76 -- # count=4 00:24:19.736 17:16:35 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:19.736 17:16:35 -- target/perf_adq.sh@81 -- # wait 605269 00:24:27.860 Initializing NVMe Controllers 00:24:27.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:27.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:27.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:27.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:27.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:27.861 Initialization complete. Launching workers. 00:24:27.861 ======================================================== 00:24:27.861 Latency(us) 00:24:27.861 Device Information : IOPS MiB/s Average min max 00:24:27.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11314.41 44.20 5657.68 3161.96 13373.33 00:24:27.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9282.64 36.26 6895.39 1354.46 10717.76 00:24:27.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11286.11 44.09 5671.03 3218.95 8808.94 00:24:27.861 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11328.81 44.25 5649.98 2507.53 11195.97 00:24:27.861 ======================================================== 00:24:27.861 Total : 43211.98 168.80 5925.03 1354.46 13373.33 00:24:27.861 00:24:27.861 17:16:43 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:27.861 17:16:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:27.861 17:16:43 -- nvmf/common.sh@116 -- # sync 00:24:27.861 17:16:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:27.861 17:16:43 -- nvmf/common.sh@119 -- # set +e 00:24:27.861 17:16:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:27.861 17:16:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:27.861 rmmod nvme_tcp 00:24:27.861 rmmod nvme_fabrics 00:24:27.861 rmmod nvme_keyring 00:24:27.861 17:16:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:27.861 17:16:43 -- nvmf/common.sh@123 -- # set -e 00:24:27.861 17:16:43 -- nvmf/common.sh@124 -- # return 0 00:24:27.861 17:16:43 -- nvmf/common.sh@477 -- # '[' -n 605121 ']' 00:24:27.861 17:16:43 -- nvmf/common.sh@478 -- # killprocess 605121 00:24:27.861 17:16:43 -- common/autotest_common.sh@926 -- # '[' -z 605121 ']' 00:24:27.861 17:16:43 -- common/autotest_common.sh@930 -- # 
kill -0 605121 00:24:27.861 17:16:43 -- common/autotest_common.sh@931 -- # uname 00:24:27.861 17:16:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:27.861 17:16:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 605121 00:24:27.861 17:16:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:27.861 17:16:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:27.861 17:16:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 605121' 00:24:27.861 killing process with pid 605121 00:24:27.861 17:16:43 -- common/autotest_common.sh@945 -- # kill 605121 00:24:27.861 17:16:43 -- common/autotest_common.sh@950 -- # wait 605121 00:24:28.118 17:16:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:28.118 17:16:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:28.118 17:16:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:28.118 17:16:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.118 17:16:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:28.118 17:16:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.118 17:16:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.118 17:16:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.014 17:16:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:30.014 17:16:46 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:30.014 17:16:46 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:30.580 17:16:46 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:31.950 17:16:48 -- target/perf_adq.sh@54 -- # sleep 5 00:24:37.217 17:16:53 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:37.218 17:16:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:37.218 17:16:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.218 17:16:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:37.218 17:16:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:37.218 17:16:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:37.218 17:16:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.218 17:16:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.218 17:16:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.218 17:16:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:37.218 17:16:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:37.218 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.218 17:16:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:37.218 17:16:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:37.218 17:16:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:37.218 17:16:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:37.218 17:16:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:37.218 17:16:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:37.218 17:16:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:37.218 17:16:53 -- nvmf/common.sh@294 -- # net_devs=() 00:24:37.218 17:16:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:37.218 17:16:53 -- nvmf/common.sh@295 -- # e810=() 00:24:37.218 17:16:53 -- nvmf/common.sh@295 -- # local -ga e810 00:24:37.218 17:16:53 -- nvmf/common.sh@296 -- # x722=() 00:24:37.218 17:16:53 -- nvmf/common.sh@296 -- # local -ga x722 00:24:37.218 17:16:53 -- nvmf/common.sh@297 -- # mlx=() 00:24:37.218 17:16:53 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:24:37.218 17:16:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.218 17:16:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:37.218 17:16:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:37.218 17:16:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:37.218 17:16:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.218 17:16:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:37.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:37.218 17:16:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:37.218 17:16:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:37.218 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:37.218 17:16:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:37.218 17:16:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.218 17:16:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.218 17:16:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.218 17:16:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.218 17:16:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:37.218 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:37.218 17:16:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.218 17:16:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:37.218 17:16:53 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.218 17:16:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:37.218 17:16:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.218 17:16:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:37.218 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:37.218 17:16:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.218 17:16:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:37.218 17:16:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:37.218 17:16:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:37.218 17:16:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.218 17:16:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.218 17:16:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.218 17:16:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:37.218 17:16:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.218 17:16:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.218 17:16:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:37.218 17:16:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.218 17:16:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.218 17:16:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:37.218 17:16:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:37.218 17:16:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.218 17:16:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.218 17:16:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.218 17:16:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.218 17:16:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:37.218 17:16:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.218 17:16:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.218 17:16:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.218 17:16:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:37.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:24:37.218 00:24:37.218 --- 10.0.0.2 ping statistics --- 00:24:37.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.218 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:24:37.218 17:16:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:24:37.218 00:24:37.218 --- 10.0.0.1 ping statistics --- 00:24:37.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.218 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:24:37.218 17:16:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.218 17:16:53 -- nvmf/common.sh@410 -- # return 0 00:24:37.218 17:16:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:37.218 17:16:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.218 17:16:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:37.218 17:16:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.218 17:16:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:37.218 17:16:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:37.218 17:16:53 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:37.218 17:16:53 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:37.218 17:16:53 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:37.218 17:16:53 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:37.218 net.core.busy_poll = 1 00:24:37.218 17:16:53 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:37.218 net.core.busy_read = 1 00:24:37.218 17:16:53 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:37.218 17:16:53 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:37.218 17:16:53 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:37.218 17:16:53 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:37.218 17:16:53 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:37.218 17:16:53 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:37.218 17:16:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:37.218 17:16:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:37.218 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.475 17:16:53 -- nvmf/common.sh@469 -- # nvmfpid=607835 00:24:37.475 17:16:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:37.475 17:16:53 -- nvmf/common.sh@470 -- # waitforlisten 607835 00:24:37.475 17:16:53 -- common/autotest_common.sh@819 -- # '[' -z 607835 ']' 00:24:37.475 17:16:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.475 17:16:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:37.475 17:16:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
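[editor's note] The adq_configure_driver sequence traced above is the complete driver-side ADQ recipe: hardware TC offload is enabled on the target port, busy polling is switched on globally, an mqprio root qdisc splits the four queues into two traffic classes, and a hardware-offloaded flower filter pins NVMe/TCP traffic (10.0.0.2:4420) to TC 1. A condensed sketch of the same sequence, using the interface and namespace names from this run (substitute your own; the 2@0 2@2 queue layout assumes a 4-queue channel configuration):

ns="ip netns exec cvl_0_0_ns_spdk"; dev=cvl_0_0
$ns ethtool --offload $dev hw-tc-offload on
$ns ethtool --set-priv-flags $dev channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# TC 0 = queues 0-1 (default traffic), TC 1 = queues 2-3 (the ADQ set)
$ns tc qdisc add dev $dev root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$ns tc qdisc add dev $dev ingress
# Hardware-offloaded filter (skip_sw): NVMe/TCP to 10.0.0.2:4420 lands in TC 1.
# The harness also runs scripts/perf/nvmf/set_xps_rxqs to align XPS afterwards.
$ns tc filter add dev $dev protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

# SPDK side (the target was started with --wait-for-rpc, as in the trace,
# so sock options can be set before the framework initializes):
rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
rpc_cmd framework_start_init
rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

Once spdk_nvme_perf is running, the harness confirms placement via nvmf_get_stats piped through jq: it counts poll groups whose current_io_qpairs is zero and fails the test if fewer than two of the four reactors went idle, i.e. if ADQ did not concentrate the connections onto the ADQ queues.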
00:24:37.475 17:16:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:37.475 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.475 [2024-07-20 17:16:53.424654] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:37.475 [2024-07-20 17:16:53.424736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.475 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.475 [2024-07-20 17:16:53.491090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.475 [2024-07-20 17:16:53.578394] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:37.475 [2024-07-20 17:16:53.578539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.475 [2024-07-20 17:16:53.578556] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.475 [2024-07-20 17:16:53.578568] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.475 [2024-07-20 17:16:53.578626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.475 [2024-07-20 17:16:53.578683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.475 [2024-07-20 17:16:53.578751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.475 [2024-07-20 17:16:53.578753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.475 17:16:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:37.475 17:16:53 -- common/autotest_common.sh@852 -- # return 0 00:24:37.475 17:16:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:37.475 17:16:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:37.475 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.732 17:16:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.732 17:16:53 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:37.732 17:16:53 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:37.732 17:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.732 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.732 17:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.732 17:16:53 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:37.732 17:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.732 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.732 17:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.732 17:16:53 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:37.732 17:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.732 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.732 [2024-07-20 17:16:53.768504] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.732 17:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.732 17:16:53 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:37.732 17:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.732 17:16:53 -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.732 Malloc1 00:24:37.732 17:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.732 17:16:53 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.732 17:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.732 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.732 17:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.732 17:16:53 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:37.732 17:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.732 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.732 17:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.732 17:16:53 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.732 17:16:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.732 17:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:37.732 [2024-07-20 17:16:53.819902] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.732 17:16:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.732 17:16:53 -- target/perf_adq.sh@94 -- # perfpid=607977 00:24:37.732 17:16:53 -- target/perf_adq.sh@95 -- # sleep 2 00:24:37.732 17:16:53 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:37.732 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.260 17:16:55 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:40.260 17:16:55 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:40.260 17:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.260 17:16:55 -- target/perf_adq.sh@97 -- # wc -l 00:24:40.260 17:16:55 -- common/autotest_common.sh@10 -- # set +x 00:24:40.260 17:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.260 17:16:55 -- target/perf_adq.sh@97 -- # count=2 00:24:40.260 17:16:55 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:40.260 17:16:55 -- target/perf_adq.sh@103 -- # wait 607977 00:24:48.364 Initializing NVMe Controllers 00:24:48.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:48.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:48.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:48.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:48.364 Initialization complete. Launching workers. 
00:24:48.364 ======================================================== 00:24:48.364 Latency(us) 00:24:48.364 Device Information : IOPS MiB/s Average min max 00:24:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7954.47 31.07 8046.86 1746.66 53107.48 00:24:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6722.97 26.26 9521.25 1855.05 53554.26 00:24:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7075.87 27.64 9069.09 1843.19 54536.13 00:24:48.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5902.18 23.06 10871.26 1919.79 55928.62 00:24:48.364 ======================================================== 00:24:48.364 Total : 27655.49 108.03 9269.60 1746.66 55928.62 00:24:48.364 00:24:48.364 17:17:03 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:48.364 17:17:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:48.364 17:17:03 -- nvmf/common.sh@116 -- # sync 00:24:48.364 17:17:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:48.364 17:17:03 -- nvmf/common.sh@119 -- # set +e 00:24:48.364 17:17:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:48.364 17:17:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:48.364 rmmod nvme_tcp 00:24:48.364 rmmod nvme_fabrics 00:24:48.364 rmmod nvme_keyring 00:24:48.364 17:17:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:48.364 17:17:04 -- nvmf/common.sh@123 -- # set -e 00:24:48.364 17:17:04 -- nvmf/common.sh@124 -- # return 0 00:24:48.364 17:17:04 -- nvmf/common.sh@477 -- # '[' -n 607835 ']' 00:24:48.364 17:17:04 -- nvmf/common.sh@478 -- # killprocess 607835 00:24:48.364 17:17:04 -- common/autotest_common.sh@926 -- # '[' -z 607835 ']' 00:24:48.364 17:17:04 -- common/autotest_common.sh@930 -- # kill -0 607835 00:24:48.364 17:17:04 -- common/autotest_common.sh@931 -- # uname 00:24:48.364 17:17:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:48.364 17:17:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 607835 00:24:48.364 17:17:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:48.364 17:17:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:48.364 17:17:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 607835' 00:24:48.364 killing process with pid 607835 00:24:48.364 17:17:04 -- common/autotest_common.sh@945 -- # kill 607835 00:24:48.364 17:17:04 -- common/autotest_common.sh@950 -- # wait 607835 00:24:48.364 17:17:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:48.364 17:17:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:48.364 17:17:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:48.364 17:17:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:48.364 17:17:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:48.364 17:17:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.364 17:17:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.364 17:17:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.643 17:17:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:51.643 17:17:07 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:51.643 00:24:51.643 real 0m43.517s 00:24:51.643 user 2m24.218s 00:24:51.643 sys 0m14.599s 00:24:51.643 17:17:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:51.643 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:24:51.643 
************************************ 00:24:51.643 END TEST nvmf_perf_adq 00:24:51.643 ************************************ 00:24:51.643 17:17:07 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:51.643 17:17:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:51.643 17:17:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:51.643 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:24:51.643 ************************************ 00:24:51.643 START TEST nvmf_shutdown 00:24:51.643 ************************************ 00:24:51.643 17:17:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:51.643 * Looking for test storage... 00:24:51.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:51.643 17:17:07 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:51.643 17:17:07 -- nvmf/common.sh@7 -- # uname -s 00:24:51.643 17:17:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.643 17:17:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.643 17:17:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.643 17:17:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.643 17:17:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.643 17:17:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.643 17:17:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.643 17:17:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.643 17:17:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.643 17:17:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.643 17:17:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:51.643 17:17:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:51.643 17:17:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.643 17:17:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.643 17:17:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:51.643 17:17:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:51.643 17:17:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.643 17:17:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.643 17:17:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.643 17:17:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.643 17:17:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.643 17:17:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.643 17:17:07 -- paths/export.sh@5 -- # export PATH 00:24:51.643 17:17:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.643 17:17:07 -- nvmf/common.sh@46 -- # : 0 00:24:51.643 17:17:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:51.643 17:17:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:51.643 17:17:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:51.643 17:17:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.643 17:17:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.643 17:17:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:51.643 17:17:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:51.643 17:17:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:51.643 17:17:07 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:51.643 17:17:07 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51.643 17:17:07 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:51.643 17:17:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:51.643 17:17:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:51.643 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:24:51.643 ************************************ 00:24:51.643 START TEST nvmf_shutdown_tc1 00:24:51.643 ************************************ 00:24:51.643 17:17:07 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:24:51.643 17:17:07 -- target/shutdown.sh@74 -- # starttarget 00:24:51.643 17:17:07 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:51.643 17:17:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:51.643 17:17:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.643 17:17:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:51.643 17:17:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:51.643 17:17:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:51.643 
17:17:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.643 17:17:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.643 17:17:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.643 17:17:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:51.643 17:17:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:51.643 17:17:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:51.643 17:17:07 -- common/autotest_common.sh@10 -- # set +x 00:24:53.543 17:17:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:53.543 17:17:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:53.543 17:17:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:53.543 17:17:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:53.543 17:17:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:53.543 17:17:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:53.543 17:17:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:53.543 17:17:09 -- nvmf/common.sh@294 -- # net_devs=() 00:24:53.543 17:17:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:53.543 17:17:09 -- nvmf/common.sh@295 -- # e810=() 00:24:53.543 17:17:09 -- nvmf/common.sh@295 -- # local -ga e810 00:24:53.543 17:17:09 -- nvmf/common.sh@296 -- # x722=() 00:24:53.543 17:17:09 -- nvmf/common.sh@296 -- # local -ga x722 00:24:53.543 17:17:09 -- nvmf/common.sh@297 -- # mlx=() 00:24:53.543 17:17:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:53.543 17:17:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.543 17:17:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:53.543 17:17:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:53.543 17:17:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:53.543 17:17:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:53.543 17:17:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:53.543 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:53.543 17:17:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:24:53.543 17:17:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:53.543 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:53.543 17:17:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:53.543 17:17:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:53.543 17:17:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.543 17:17:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:53.543 17:17:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.543 17:17:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:53.543 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:53.543 17:17:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.543 17:17:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:53.543 17:17:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.543 17:17:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:53.543 17:17:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.543 17:17:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:53.543 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:53.543 17:17:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.543 17:17:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:53.543 17:17:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:53.543 17:17:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:53.543 17:17:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.543 17:17:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.543 17:17:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.543 17:17:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:53.543 17:17:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.543 17:17:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.543 17:17:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:53.543 17:17:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.543 17:17:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.543 17:17:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:53.543 17:17:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:53.543 17:17:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.543 17:17:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.543 17:17:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.543 17:17:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.543 17:17:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:53.543 17:17:09 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.543 17:17:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.543 17:17:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.543 17:17:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:53.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:24:53.543 00:24:53.543 --- 10.0.0.2 ping statistics --- 00:24:53.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.543 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:24:53.543 17:17:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:53.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:24:53.543 00:24:53.543 --- 10.0.0.1 ping statistics --- 00:24:53.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.543 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:53.543 17:17:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.543 17:17:09 -- nvmf/common.sh@410 -- # return 0 00:24:53.543 17:17:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:53.543 17:17:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.543 17:17:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:53.543 17:17:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.543 17:17:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:53.543 17:17:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:53.543 17:17:09 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:53.543 17:17:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:53.543 17:17:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:53.543 17:17:09 -- common/autotest_common.sh@10 -- # set +x 00:24:53.543 17:17:09 -- nvmf/common.sh@469 -- # nvmfpid=611218 00:24:53.543 17:17:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:53.543 17:17:09 -- nvmf/common.sh@470 -- # waitforlisten 611218 00:24:53.543 17:17:09 -- common/autotest_common.sh@819 -- # '[' -z 611218 ']' 00:24:53.543 17:17:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.543 17:17:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:53.543 17:17:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.544 17:17:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:53.544 17:17:09 -- common/autotest_common.sh@10 -- # set +x 00:24:53.544 [2024-07-20 17:17:09.682529] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
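[editor's note] This is the second time nvmftestinit builds the same two-port topology, so it is worth spelling out: the first E810 port is moved into a private namespace as the target side (10.0.0.2) while the second port stays in the root namespace as the initiator (10.0.0.1), and both directions are smoke-tested with ping. A minimal standalone recreation, with the interface names from this run:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # clear stale addresses
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator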
00:24:53.544 [2024-07-20 17:17:09.682615] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.801 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.801 [2024-07-20 17:17:09.748374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:53.801 [2024-07-20 17:17:09.833595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:53.801 [2024-07-20 17:17:09.833747] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.801 [2024-07-20 17:17:09.833764] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.801 [2024-07-20 17:17:09.833777] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.801 [2024-07-20 17:17:09.833873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.801 [2024-07-20 17:17:09.833937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.801 [2024-07-20 17:17:09.833990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:53.801 [2024-07-20 17:17:09.833992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.734 17:17:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:54.734 17:17:10 -- common/autotest_common.sh@852 -- # return 0 00:24:54.734 17:17:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:54.734 17:17:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:54.734 17:17:10 -- common/autotest_common.sh@10 -- # set +x 00:24:54.734 17:17:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.734 17:17:10 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.734 17:17:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.734 17:17:10 -- common/autotest_common.sh@10 -- # set +x 00:24:54.734 [2024-07-20 17:17:10.713603] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.734 17:17:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.734 17:17:10 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:54.734 17:17:10 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:54.734 17:17:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:54.734 17:17:10 -- common/autotest_common.sh@10 -- # set +x 00:24:54.734 17:17:10 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- 
target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.734 17:17:10 -- target/shutdown.sh@28 -- # cat 00:24:54.734 17:17:10 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:54.734 17:17:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.734 17:17:10 -- common/autotest_common.sh@10 -- # set +x 00:24:54.734 Malloc1 00:24:54.734 [2024-07-20 17:17:10.801439] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.734 Malloc2 00:24:54.734 Malloc3 00:24:54.991 Malloc4 00:24:54.991 Malloc5 00:24:54.991 Malloc6 00:24:54.991 Malloc7 00:24:54.991 Malloc8 00:24:55.249 Malloc9 00:24:55.249 Malloc10 00:24:55.249 17:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.249 17:17:11 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:55.249 17:17:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:55.249 17:17:11 -- common/autotest_common.sh@10 -- # set +x 00:24:55.249 17:17:11 -- target/shutdown.sh@78 -- # perfpid=611518 00:24:55.249 17:17:11 -- target/shutdown.sh@79 -- # waitforlisten 611518 /var/tmp/bdevperf.sock 00:24:55.249 17:17:11 -- common/autotest_common.sh@819 -- # '[' -z 611518 ']' 00:24:55.249 17:17:11 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:55.249 17:17:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.249 17:17:11 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:55.249 17:17:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:55.249 17:17:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
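[editor's note] The loop of cat calls at shutdown.sh line 28 accumulates one subsystem definition per iteration in rpcs.txt, and the bare rpc_cmd at line 35 replays the whole file in a single RPC session; Malloc1 through Malloc10 above are its output. The fragment each iteration appends is not echoed in the trace, so the sketch below is a reconstruction based on the visible results (ten malloc bdevs of MALLOC_BDEV_SIZE=64 MiB with MALLOC_BLOCK_SIZE=512 B blocks, each exported by its own subsystem on 10.0.0.2:4420); the SPDK$i serial string and $testdir path are assumptions:

for i in {1..10}; do
	cat >> "$testdir/rpcs.txt" <<EOF
bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"   # one batched session creates all ten subsystems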
00:24:55.249 17:17:11 -- nvmf/common.sh@520 -- # config=() 00:24:55.249 17:17:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:55.249 17:17:11 -- nvmf/common.sh@520 -- # local subsystem config 00:24:55.249 17:17:11 -- common/autotest_common.sh@10 -- # set +x 00:24:55.249 17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.249 { 00:24:55.249 "params": { 00:24:55.249 "name": "Nvme$subsystem", 00:24:55.249 "trtype": "$TEST_TRANSPORT", 00:24:55.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.249 "adrfam": "ipv4", 00:24:55.249 "trsvcid": "$NVMF_PORT", 00:24:55.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.249 "hdgst": ${hdgst:-false}, 00:24:55.249 "ddgst": ${ddgst:-false} 00:24:55.249 }, 00:24:55.249 "method": "bdev_nvme_attach_controller" 00:24:55.249 } 00:24:55.249 EOF 00:24:55.249 )") 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.249 17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.249 { 00:24:55.249 "params": { 00:24:55.249 "name": "Nvme$subsystem", 00:24:55.249 "trtype": "$TEST_TRANSPORT", 00:24:55.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.249 "adrfam": "ipv4", 00:24:55.249 "trsvcid": "$NVMF_PORT", 00:24:55.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.249 "hdgst": ${hdgst:-false}, 00:24:55.249 "ddgst": ${ddgst:-false} 00:24:55.249 }, 00:24:55.249 "method": "bdev_nvme_attach_controller" 00:24:55.249 } 00:24:55.249 EOF 00:24:55.249 )") 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.249 17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.249 { 00:24:55.249 "params": { 00:24:55.249 "name": "Nvme$subsystem", 00:24:55.249 "trtype": "$TEST_TRANSPORT", 00:24:55.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.249 "adrfam": "ipv4", 00:24:55.249 "trsvcid": "$NVMF_PORT", 00:24:55.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.249 "hdgst": ${hdgst:-false}, 00:24:55.249 "ddgst": ${ddgst:-false} 00:24:55.249 }, 00:24:55.249 "method": "bdev_nvme_attach_controller" 00:24:55.249 } 00:24:55.249 EOF 00:24:55.249 )") 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.249 17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.249 { 00:24:55.249 "params": { 00:24:55.249 "name": "Nvme$subsystem", 00:24:55.249 "trtype": "$TEST_TRANSPORT", 00:24:55.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.249 "adrfam": "ipv4", 00:24:55.249 "trsvcid": "$NVMF_PORT", 00:24:55.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.249 "hdgst": ${hdgst:-false}, 00:24:55.249 "ddgst": ${ddgst:-false} 00:24:55.249 }, 00:24:55.249 "method": "bdev_nvme_attach_controller" 00:24:55.249 } 00:24:55.249 EOF 00:24:55.249 )") 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.249 17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.249 { 00:24:55.249 "params": { 00:24:55.249 "name": "Nvme$subsystem", 00:24:55.249 "trtype": 
"$TEST_TRANSPORT", 00:24:55.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.249 "adrfam": "ipv4", 00:24:55.249 "trsvcid": "$NVMF_PORT", 00:24:55.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.249 "hdgst": ${hdgst:-false}, 00:24:55.249 "ddgst": ${ddgst:-false} 00:24:55.249 }, 00:24:55.249 "method": "bdev_nvme_attach_controller" 00:24:55.249 } 00:24:55.249 EOF 00:24:55.249 )") 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.249 17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.249 { 00:24:55.249 "params": { 00:24:55.249 "name": "Nvme$subsystem", 00:24:55.249 "trtype": "$TEST_TRANSPORT", 00:24:55.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.249 "adrfam": "ipv4", 00:24:55.249 "trsvcid": "$NVMF_PORT", 00:24:55.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.249 "hdgst": ${hdgst:-false}, 00:24:55.249 "ddgst": ${ddgst:-false} 00:24:55.249 }, 00:24:55.249 "method": "bdev_nvme_attach_controller" 00:24:55.249 } 00:24:55.249 EOF 00:24:55.249 )") 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.249 17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.249 { 00:24:55.249 "params": { 00:24:55.249 "name": "Nvme$subsystem", 00:24:55.249 "trtype": "$TEST_TRANSPORT", 00:24:55.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.249 "adrfam": "ipv4", 00:24:55.249 "trsvcid": "$NVMF_PORT", 00:24:55.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.249 "hdgst": ${hdgst:-false}, 00:24:55.249 "ddgst": ${ddgst:-false} 00:24:55.249 }, 00:24:55.249 "method": "bdev_nvme_attach_controller" 00:24:55.249 } 00:24:55.249 EOF 00:24:55.249 )") 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.249 17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.249 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.249 { 00:24:55.249 "params": { 00:24:55.249 "name": "Nvme$subsystem", 00:24:55.249 "trtype": "$TEST_TRANSPORT", 00:24:55.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.249 "adrfam": "ipv4", 00:24:55.249 "trsvcid": "$NVMF_PORT", 00:24:55.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.250 "hdgst": ${hdgst:-false}, 00:24:55.250 "ddgst": ${ddgst:-false} 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 } 00:24:55.250 EOF 00:24:55.250 )") 00:24:55.250 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.250 17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.250 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.250 { 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme$subsystem", 00:24:55.250 "trtype": "$TEST_TRANSPORT", 00:24:55.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "$NVMF_PORT", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.250 "hdgst": ${hdgst:-false}, 00:24:55.250 "ddgst": ${ddgst:-false} 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 } 00:24:55.250 EOF 00:24:55.250 )") 00:24:55.250 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.250 
17:17:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:55.250 17:17:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:55.250 { 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme$subsystem", 00:24:55.250 "trtype": "$TEST_TRANSPORT", 00:24:55.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "$NVMF_PORT", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:55.250 "hdgst": ${hdgst:-false}, 00:24:55.250 "ddgst": ${ddgst:-false} 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 } 00:24:55.250 EOF 00:24:55.250 )") 00:24:55.250 17:17:11 -- nvmf/common.sh@542 -- # cat 00:24:55.250 17:17:11 -- nvmf/common.sh@544 -- # jq . 00:24:55.250 17:17:11 -- nvmf/common.sh@545 -- # IFS=, 00:24:55.250 17:17:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme1", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 },{ 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme2", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 },{ 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme3", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 },{ 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme4", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 },{ 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme5", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 },{ 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme6", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 },{ 00:24:55.250 "params": { 00:24:55.250 
"name": "Nvme7", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 },{ 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme8", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 },{ 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme9", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 },{ 00:24:55.250 "params": { 00:24:55.250 "name": "Nvme10", 00:24:55.250 "trtype": "tcp", 00:24:55.250 "traddr": "10.0.0.2", 00:24:55.250 "adrfam": "ipv4", 00:24:55.250 "trsvcid": "4420", 00:24:55.250 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:55.250 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:55.250 "hdgst": false, 00:24:55.250 "ddgst": false 00:24:55.250 }, 00:24:55.250 "method": "bdev_nvme_attach_controller" 00:24:55.250 }' 00:24:55.250 [2024-07-20 17:17:11.320385] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:24:55.250 [2024-07-20 17:17:11.320460] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:55.250 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.250 [2024-07-20 17:17:11.386621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.507 [2024-07-20 17:17:11.471877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.876 17:17:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:56.876 17:17:12 -- common/autotest_common.sh@852 -- # return 0 00:24:56.876 17:17:12 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:56.876 17:17:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.877 17:17:12 -- common/autotest_common.sh@10 -- # set +x 00:24:56.877 17:17:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.877 17:17:12 -- target/shutdown.sh@83 -- # kill -9 611518 00:24:56.877 17:17:12 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:56.877 17:17:12 -- target/shutdown.sh@87 -- # sleep 1 00:24:58.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 611518 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:58.243 17:17:13 -- target/shutdown.sh@88 -- # kill -0 611218 00:24:58.243 17:17:14 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:58.243 17:17:14 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:58.243 17:17:14 -- nvmf/common.sh@520 -- # config=() 00:24:58.243 17:17:14 -- nvmf/common.sh@520 -- # local subsystem config 00:24:58.243 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.243 { 00:24:58.243 "params": { 00:24:58.243 "name": "Nvme$subsystem", 00:24:58.243 "trtype": "$TEST_TRANSPORT", 00:24:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.243 "adrfam": "ipv4", 00:24:58.243 "trsvcid": "$NVMF_PORT", 00:24:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.243 "hdgst": ${hdgst:-false}, 00:24:58.243 "ddgst": ${ddgst:-false} 00:24:58.243 }, 00:24:58.243 "method": "bdev_nvme_attach_controller" 00:24:58.243 } 00:24:58.243 EOF 00:24:58.243 )") 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.243 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.243 { 00:24:58.243 "params": { 00:24:58.243 "name": "Nvme$subsystem", 00:24:58.243 "trtype": "$TEST_TRANSPORT", 00:24:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.243 "adrfam": "ipv4", 00:24:58.243 "trsvcid": "$NVMF_PORT", 00:24:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.243 "hdgst": ${hdgst:-false}, 00:24:58.243 "ddgst": ${ddgst:-false} 00:24:58.243 }, 00:24:58.243 "method": "bdev_nvme_attach_controller" 00:24:58.243 } 00:24:58.243 EOF 00:24:58.243 )") 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.243 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.243 17:17:14 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.243 { 00:24:58.243 "params": { 00:24:58.243 "name": "Nvme$subsystem", 00:24:58.243 "trtype": "$TEST_TRANSPORT", 00:24:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.243 "adrfam": "ipv4", 00:24:58.243 "trsvcid": "$NVMF_PORT", 00:24:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.243 "hdgst": ${hdgst:-false}, 00:24:58.243 "ddgst": ${ddgst:-false} 00:24:58.243 }, 00:24:58.243 "method": "bdev_nvme_attach_controller" 00:24:58.243 } 00:24:58.243 EOF 00:24:58.243 )") 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.243 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.243 { 00:24:58.243 "params": { 00:24:58.243 "name": "Nvme$subsystem", 00:24:58.243 "trtype": "$TEST_TRANSPORT", 00:24:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.243 "adrfam": "ipv4", 00:24:58.243 "trsvcid": "$NVMF_PORT", 00:24:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.243 "hdgst": ${hdgst:-false}, 00:24:58.243 "ddgst": ${ddgst:-false} 00:24:58.243 }, 00:24:58.243 "method": "bdev_nvme_attach_controller" 00:24:58.243 } 00:24:58.243 EOF 00:24:58.243 )") 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.243 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.243 { 00:24:58.243 "params": { 00:24:58.243 "name": "Nvme$subsystem", 00:24:58.243 "trtype": "$TEST_TRANSPORT", 00:24:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.243 "adrfam": "ipv4", 00:24:58.243 "trsvcid": "$NVMF_PORT", 00:24:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.243 "hdgst": ${hdgst:-false}, 00:24:58.243 "ddgst": ${ddgst:-false} 00:24:58.243 }, 00:24:58.243 "method": "bdev_nvme_attach_controller" 00:24:58.243 } 00:24:58.243 EOF 00:24:58.243 )") 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.243 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.243 { 00:24:58.243 "params": { 00:24:58.243 "name": "Nvme$subsystem", 00:24:58.243 "trtype": "$TEST_TRANSPORT", 00:24:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.243 "adrfam": "ipv4", 00:24:58.243 "trsvcid": "$NVMF_PORT", 00:24:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.243 "hdgst": ${hdgst:-false}, 00:24:58.243 "ddgst": ${ddgst:-false} 00:24:58.243 }, 00:24:58.243 "method": "bdev_nvme_attach_controller" 00:24:58.243 } 00:24:58.243 EOF 00:24:58.243 )") 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.243 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.243 { 00:24:58.243 "params": { 00:24:58.243 "name": "Nvme$subsystem", 00:24:58.243 "trtype": "$TEST_TRANSPORT", 00:24:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.243 "adrfam": "ipv4", 00:24:58.243 "trsvcid": "$NVMF_PORT", 00:24:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.243 "hdgst": ${hdgst:-false}, 00:24:58.243 "ddgst": ${ddgst:-false} 00:24:58.243 }, 00:24:58.243 
"method": "bdev_nvme_attach_controller" 00:24:58.243 } 00:24:58.243 EOF 00:24:58.243 )") 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.243 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.243 { 00:24:58.243 "params": { 00:24:58.243 "name": "Nvme$subsystem", 00:24:58.243 "trtype": "$TEST_TRANSPORT", 00:24:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.243 "adrfam": "ipv4", 00:24:58.243 "trsvcid": "$NVMF_PORT", 00:24:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.243 "hdgst": ${hdgst:-false}, 00:24:58.243 "ddgst": ${ddgst:-false} 00:24:58.243 }, 00:24:58.243 "method": "bdev_nvme_attach_controller" 00:24:58.243 } 00:24:58.243 EOF 00:24:58.243 )") 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.243 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.243 17:17:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.243 { 00:24:58.243 "params": { 00:24:58.243 "name": "Nvme$subsystem", 00:24:58.243 "trtype": "$TEST_TRANSPORT", 00:24:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.243 "adrfam": "ipv4", 00:24:58.243 "trsvcid": "$NVMF_PORT", 00:24:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.243 "hdgst": ${hdgst:-false}, 00:24:58.243 "ddgst": ${ddgst:-false} 00:24:58.243 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 } 00:24:58.244 EOF 00:24:58.244 )") 00:24:58.244 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.244 17:17:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:58.244 17:17:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:58.244 { 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme$subsystem", 00:24:58.244 "trtype": "$TEST_TRANSPORT", 00:24:58.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "$NVMF_PORT", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:58.244 "hdgst": ${hdgst:-false}, 00:24:58.244 "ddgst": ${ddgst:-false} 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 } 00:24:58.244 EOF 00:24:58.244 )") 00:24:58.244 17:17:14 -- nvmf/common.sh@542 -- # cat 00:24:58.244 17:17:14 -- nvmf/common.sh@544 -- # jq . 
00:24:58.244 17:17:14 -- nvmf/common.sh@545 -- # IFS=, 00:24:58.244 17:17:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme1", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 },{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme2", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 },{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme3", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 },{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme4", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 },{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme5", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 },{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme6", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 },{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme7", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 },{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme8", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": 
"bdev_nvme_attach_controller" 00:24:58.244 },{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme9", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 },{ 00:24:58.244 "params": { 00:24:58.244 "name": "Nvme10", 00:24:58.244 "trtype": "tcp", 00:24:58.244 "traddr": "10.0.0.2", 00:24:58.244 "adrfam": "ipv4", 00:24:58.244 "trsvcid": "4420", 00:24:58.244 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:58.244 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:58.244 "hdgst": false, 00:24:58.244 "ddgst": false 00:24:58.244 }, 00:24:58.244 "method": "bdev_nvme_attach_controller" 00:24:58.244 }' 00:24:58.244 [2024-07-20 17:17:14.044222] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:58.244 [2024-07-20 17:17:14.044321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611834 ] 00:24:58.244 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.244 [2024-07-20 17:17:14.109763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.244 [2024-07-20 17:17:14.196882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.613 Running I/O for 1 seconds... 00:25:00.986 00:25:00.986 Latency(us) 00:25:00.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme1n1 : 1.08 358.55 22.41 0.00 0.00 172621.17 19709.35 154567.87 00:25:00.986 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme2n1 : 1.10 360.65 22.54 0.00 0.00 172015.35 31068.92 143693.75 00:25:00.986 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme3n1 : 1.11 320.97 20.06 0.00 0.00 192307.58 27573.67 184860.07 00:25:00.986 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme4n1 : 1.11 359.39 22.46 0.00 0.00 170192.59 31068.92 131266.18 00:25:00.986 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme5n1 : 1.10 326.91 20.43 0.00 0.00 185046.94 13883.92 148354.09 00:25:00.986 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme6n1 : 1.09 365.85 22.87 0.00 0.00 164564.27 35535.08 126605.84 00:25:00.986 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme7n1 : 1.09 363.98 22.75 0.00 0.00 164185.67 36311.80 138256.69 00:25:00.986 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme8n1 : 1.11 358.56 22.41 0.00 0.00 165784.98 30292.20 
135149.80 00:25:00.986 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme9n1 : 1.11 357.12 22.32 0.00 0.00 166291.65 23107.51 154567.87 00:25:00.986 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:00.986 Verification LBA range: start 0x0 length 0x400 00:25:00.986 Nvme10n1 : 1.12 356.33 22.27 0.00 0.00 165788.15 20874.43 132042.90 00:25:00.986 =================================================================================================================== 00:25:00.986 Total : 3528.31 220.52 0.00 0.00 171519.93 13883.92 184860.07 00:25:00.986 17:17:17 -- target/shutdown.sh@93 -- # stoptarget 00:25:00.986 17:17:17 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:00.986 17:17:17 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:00.986 17:17:17 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:00.986 17:17:17 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:00.986 17:17:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:00.986 17:17:17 -- nvmf/common.sh@116 -- # sync 00:25:00.986 17:17:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:00.986 17:17:17 -- nvmf/common.sh@119 -- # set +e 00:25:00.986 17:17:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:00.986 17:17:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:00.986 rmmod nvme_tcp 00:25:00.986 rmmod nvme_fabrics 00:25:00.986 rmmod nvme_keyring 00:25:01.244 17:17:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:01.244 17:17:17 -- nvmf/common.sh@123 -- # set -e 00:25:01.244 17:17:17 -- nvmf/common.sh@124 -- # return 0 00:25:01.244 17:17:17 -- nvmf/common.sh@477 -- # '[' -n 611218 ']' 00:25:01.244 17:17:17 -- nvmf/common.sh@478 -- # killprocess 611218 00:25:01.244 17:17:17 -- common/autotest_common.sh@926 -- # '[' -z 611218 ']' 00:25:01.244 17:17:17 -- common/autotest_common.sh@930 -- # kill -0 611218 00:25:01.244 17:17:17 -- common/autotest_common.sh@931 -- # uname 00:25:01.244 17:17:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:01.244 17:17:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 611218 00:25:01.244 17:17:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:01.244 17:17:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:01.244 17:17:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 611218' 00:25:01.244 killing process with pid 611218 00:25:01.244 17:17:17 -- common/autotest_common.sh@945 -- # kill 611218 00:25:01.244 17:17:17 -- common/autotest_common.sh@950 -- # wait 611218 00:25:01.808 17:17:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:01.808 17:17:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:01.808 17:17:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:01.808 17:17:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.808 17:17:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:01.808 17:17:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.808 17:17:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.808 17:17:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.707 17:17:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:03.707 00:25:03.707 real 0m12.253s 00:25:03.707 
user 0m35.822s 00:25:03.707 sys 0m3.280s 00:25:03.707 17:17:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.707 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:25:03.707 ************************************ 00:25:03.707 END TEST nvmf_shutdown_tc1 00:25:03.707 ************************************ 00:25:03.707 17:17:19 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:03.707 17:17:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:03.707 17:17:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.707 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:25:03.707 ************************************ 00:25:03.707 START TEST nvmf_shutdown_tc2 00:25:03.707 ************************************ 00:25:03.707 17:17:19 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:25:03.707 17:17:19 -- target/shutdown.sh@98 -- # starttarget 00:25:03.707 17:17:19 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:03.707 17:17:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:03.707 17:17:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.707 17:17:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:03.707 17:17:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:03.707 17:17:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:03.707 17:17:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.707 17:17:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.707 17:17:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.707 17:17:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:03.707 17:17:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:03.707 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:25:03.707 17:17:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:03.707 17:17:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:03.707 17:17:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:03.707 17:17:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:03.707 17:17:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:03.707 17:17:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:03.707 17:17:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:03.707 17:17:19 -- nvmf/common.sh@294 -- # net_devs=() 00:25:03.707 17:17:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:03.707 17:17:19 -- nvmf/common.sh@295 -- # e810=() 00:25:03.707 17:17:19 -- nvmf/common.sh@295 -- # local -ga e810 00:25:03.707 17:17:19 -- nvmf/common.sh@296 -- # x722=() 00:25:03.707 17:17:19 -- nvmf/common.sh@296 -- # local -ga x722 00:25:03.707 17:17:19 -- nvmf/common.sh@297 -- # mlx=() 00:25:03.707 17:17:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:03.707 17:17:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.707 17:17:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:03.707 17:17:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:03.707 17:17:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:03.707 17:17:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.707 17:17:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:03.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:03.707 17:17:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:03.707 17:17:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:03.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:03.707 17:17:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:03.707 17:17:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:03.707 17:17:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.707 17:17:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.707 17:17:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.707 17:17:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:03.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:03.707 17:17:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.707 17:17:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:03.707 17:17:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.707 17:17:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:03.707 17:17:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.707 17:17:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:03.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:03.707 17:17:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.707 17:17:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:03.707 17:17:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:03.707 17:17:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:03.707 17:17:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 
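The discovery pass traced above matches each supported NIC (here 0x8086:0x159b, an Intel E810) against the pci_devs ID tables and then resolves its PCI address to a kernel interface name through sysfs. A standalone sketch of that resolution step, using the address from this log:

# Resolve a NIC's PCI address to its kernel net interface name(s) via sysfs,
# as the gather_supported_nvmf_pci_devs trace above does.
pci=0000:0a:00.0                                  # address from this log
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path prefix
echo "Found net devices under $pci: ${pci_net_devs[*]}"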
00:25:03.707 17:17:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.707 17:17:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.707 17:17:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.707 17:17:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:03.707 17:17:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.707 17:17:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.707 17:17:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:03.707 17:17:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.707 17:17:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.707 17:17:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:03.707 17:17:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:03.707 17:17:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.708 17:17:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.708 17:17:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.708 17:17:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.708 17:17:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:03.708 17:17:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.965 17:17:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.965 17:17:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.965 17:17:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:03.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:25:03.965 00:25:03.965 --- 10.0.0.2 ping statistics --- 00:25:03.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.965 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:25:03.965 17:17:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:25:03.965 00:25:03.965 --- 10.0.0.1 ping statistics --- 00:25:03.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.965 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:25:03.965 17:17:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.965 17:17:19 -- nvmf/common.sh@410 -- # return 0 00:25:03.965 17:17:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:03.965 17:17:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.965 17:17:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:03.966 17:17:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:03.966 17:17:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.966 17:17:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:03.966 17:17:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:03.966 17:17:19 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:03.966 17:17:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:03.966 17:17:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:03.966 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:25:03.966 17:17:19 -- nvmf/common.sh@469 -- # nvmfpid=612678 00:25:03.966 17:17:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:03.966 17:17:19 -- nvmf/common.sh@470 -- # waitforlisten 612678 00:25:03.966 17:17:19 -- common/autotest_common.sh@819 -- # '[' -z 612678 ']' 00:25:03.966 17:17:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.966 17:17:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.966 17:17:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.966 17:17:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.966 17:17:19 -- common/autotest_common.sh@10 -- # set +x 00:25:03.966 [2024-07-20 17:17:19.967217] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:03.966 [2024-07-20 17:17:19.967312] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.966 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.966 [2024-07-20 17:17:20.039581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:04.223 [2024-07-20 17:17:20.130396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:04.223 [2024-07-20 17:17:20.130550] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.223 [2024-07-20 17:17:20.130569] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.223 [2024-07-20 17:17:20.130582] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
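nvmf_tcp_init, traced above, splits the two ports of one NIC into a point-to-point test topology: the first port moves into a private network namespace as the target side (10.0.0.2), the second stays in the root namespace as the initiator (10.0.0.1), and a pair of pings proves reachability before any NVMe/TCP traffic flows. Condensed from the traced commands:

# Target port into its own namespace; initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Let NVMe/TCP (port 4420) in, then sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Running the target inside its own namespace is what lets a single host exercise real TCP between two "separate" endpoints without a second machine.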
00:25:04.223 [2024-07-20 17:17:20.130738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.223 [2024-07-20 17:17:20.130766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:04.223 [2024-07-20 17:17:20.130798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:04.223 [2024-07-20 17:17:20.130800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.789 17:17:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:04.789 17:17:20 -- common/autotest_common.sh@852 -- # return 0 00:25:04.789 17:17:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:04.789 17:17:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:04.789 17:17:20 -- common/autotest_common.sh@10 -- # set +x 00:25:05.049 17:17:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.049 17:17:20 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:05.049 17:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.049 17:17:20 -- common/autotest_common.sh@10 -- # set +x 00:25:05.049 [2024-07-20 17:17:20.971437] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.049 17:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.049 17:17:20 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:05.049 17:17:20 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:05.049 17:17:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:05.049 17:17:20 -- common/autotest_common.sh@10 -- # set +x 00:25:05.049 17:17:20 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:05.049 17:17:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:20 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:20 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:20 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:20 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:20 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:20 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:20 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:20 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:20 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:21 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:21 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:05.049 17:17:21 -- target/shutdown.sh@28 -- # cat 00:25:05.049 17:17:21 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:05.049 17:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.049 17:17:21 -- common/autotest_common.sh@10 -- # set +x 00:25:05.049 Malloc1 00:25:05.049 [2024-07-20 17:17:21.051330] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.049 Malloc2 
00:25:05.049 Malloc3 00:25:05.049 Malloc4 00:25:05.308 Malloc5 00:25:05.308 Malloc6 00:25:05.308 Malloc7 00:25:05.308 Malloc8 00:25:05.308 Malloc9 00:25:05.566 Malloc10 00:25:05.566 17:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.566 17:17:21 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:05.566 17:17:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:05.566 17:17:21 -- common/autotest_common.sh@10 -- # set +x 00:25:05.566 17:17:21 -- target/shutdown.sh@102 -- # perfpid=612930 00:25:05.566 17:17:21 -- target/shutdown.sh@103 -- # waitforlisten 612930 /var/tmp/bdevperf.sock 00:25:05.566 17:17:21 -- common/autotest_common.sh@819 -- # '[' -z 612930 ']' 00:25:05.566 17:17:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.566 17:17:21 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:05.566 17:17:21 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:05.566 17:17:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:05.566 17:17:21 -- nvmf/common.sh@520 -- # config=() 00:25:05.566 17:17:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:05.566 17:17:21 -- nvmf/common.sh@520 -- # local subsystem config 00:25:05.566 17:17:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:05.566 17:17:21 -- common/autotest_common.sh@10 -- # set +x 00:25:05.566 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:05.566 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.566 { 00:25:05.566 "params": { 00:25:05.566 "name": "Nvme$subsystem", 00:25:05.566 "trtype": "$TEST_TRANSPORT", 00:25:05.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.566 "adrfam": "ipv4", 00:25:05.566 "trsvcid": "$NVMF_PORT", 00:25:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.566 "hdgst": ${hdgst:-false}, 00:25:05.566 "ddgst": ${ddgst:-false} 00:25:05.566 }, 00:25:05.566 "method": "bdev_nvme_attach_controller" 00:25:05.566 } 00:25:05.566 EOF 00:25:05.566 )") 00:25:05.566 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.566 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:05.566 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.566 { 00:25:05.566 "params": { 00:25:05.566 "name": "Nvme$subsystem", 00:25:05.567 "trtype": "$TEST_TRANSPORT", 00:25:05.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "$NVMF_PORT", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.567 "hdgst": ${hdgst:-false}, 00:25:05.567 "ddgst": ${ddgst:-false} 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 } 00:25:05.567 EOF 00:25:05.567 )") 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.567 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.567 { 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme$subsystem", 00:25:05.567 "trtype": "$TEST_TRANSPORT", 00:25:05.567 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "$NVMF_PORT", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.567 "hdgst": ${hdgst:-false}, 00:25:05.567 "ddgst": ${ddgst:-false} 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 } 00:25:05.567 EOF 00:25:05.567 )") 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.567 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.567 { 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme$subsystem", 00:25:05.567 "trtype": "$TEST_TRANSPORT", 00:25:05.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "$NVMF_PORT", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.567 "hdgst": ${hdgst:-false}, 00:25:05.567 "ddgst": ${ddgst:-false} 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 } 00:25:05.567 EOF 00:25:05.567 )") 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.567 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.567 { 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme$subsystem", 00:25:05.567 "trtype": "$TEST_TRANSPORT", 00:25:05.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "$NVMF_PORT", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.567 "hdgst": ${hdgst:-false}, 00:25:05.567 "ddgst": ${ddgst:-false} 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 } 00:25:05.567 EOF 00:25:05.567 )") 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.567 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.567 { 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme$subsystem", 00:25:05.567 "trtype": "$TEST_TRANSPORT", 00:25:05.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "$NVMF_PORT", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.567 "hdgst": ${hdgst:-false}, 00:25:05.567 "ddgst": ${ddgst:-false} 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 } 00:25:05.567 EOF 00:25:05.567 )") 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.567 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.567 { 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme$subsystem", 00:25:05.567 "trtype": "$TEST_TRANSPORT", 00:25:05.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "$NVMF_PORT", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.567 "hdgst": ${hdgst:-false}, 00:25:05.567 "ddgst": ${ddgst:-false} 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 } 00:25:05.567 EOF 00:25:05.567 )") 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.567 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.567 { 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme$subsystem", 00:25:05.567 "trtype": "$TEST_TRANSPORT", 00:25:05.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "$NVMF_PORT", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.567 "hdgst": ${hdgst:-false}, 00:25:05.567 "ddgst": ${ddgst:-false} 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 } 00:25:05.567 EOF 00:25:05.567 )") 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.567 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.567 { 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme$subsystem", 00:25:05.567 "trtype": "$TEST_TRANSPORT", 00:25:05.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "$NVMF_PORT", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.567 "hdgst": ${hdgst:-false}, 00:25:05.567 "ddgst": ${ddgst:-false} 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 } 00:25:05.567 EOF 00:25:05.567 )") 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.567 17:17:21 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:05.567 { 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme$subsystem", 00:25:05.567 "trtype": "$TEST_TRANSPORT", 00:25:05.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "$NVMF_PORT", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:05.567 "hdgst": ${hdgst:-false}, 00:25:05.567 "ddgst": ${ddgst:-false} 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 } 00:25:05.567 EOF 00:25:05.567 )") 00:25:05.567 17:17:21 -- nvmf/common.sh@542 -- # cat 00:25:05.567 17:17:21 -- nvmf/common.sh@544 -- # jq . 
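As traced at shutdown.sh@101 above, the generated JSON never touches disk: it is fed to bdevperf through process substitution, so the application reads its bdev configuration from an anonymous /dev/fd descriptor. A sketch of that launch with paths taken from this log (the source line is shown for completeness; inside the suite the helper is already loaded):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this log
source "$rootdir/test/nvmf/common.sh"                        # provides gen_nvmf_target_json

"$rootdir/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!   # pid later handed to waitforlisten/killprocess (612930 in this run)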
00:25:05.567 17:17:21 -- nvmf/common.sh@545 -- # IFS=, 00:25:05.567 17:17:21 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme1", 00:25:05.567 "trtype": "tcp", 00:25:05.567 "traddr": "10.0.0.2", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "4420", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:05.567 "hdgst": false, 00:25:05.567 "ddgst": false 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 },{ 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme2", 00:25:05.567 "trtype": "tcp", 00:25:05.567 "traddr": "10.0.0.2", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "4420", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:05.567 "hdgst": false, 00:25:05.567 "ddgst": false 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 },{ 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme3", 00:25:05.567 "trtype": "tcp", 00:25:05.567 "traddr": "10.0.0.2", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "4420", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:05.567 "hdgst": false, 00:25:05.567 "ddgst": false 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 },{ 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme4", 00:25:05.567 "trtype": "tcp", 00:25:05.567 "traddr": "10.0.0.2", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "4420", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:05.567 "hdgst": false, 00:25:05.567 "ddgst": false 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 },{ 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme5", 00:25:05.567 "trtype": "tcp", 00:25:05.567 "traddr": "10.0.0.2", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "4420", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:05.567 "hdgst": false, 00:25:05.567 "ddgst": false 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 },{ 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme6", 00:25:05.567 "trtype": "tcp", 00:25:05.567 "traddr": "10.0.0.2", 00:25:05.567 "adrfam": "ipv4", 00:25:05.567 "trsvcid": "4420", 00:25:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:05.567 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:05.567 "hdgst": false, 00:25:05.567 "ddgst": false 00:25:05.567 }, 00:25:05.567 "method": "bdev_nvme_attach_controller" 00:25:05.567 },{ 00:25:05.567 "params": { 00:25:05.567 "name": "Nvme7", 00:25:05.567 "trtype": "tcp", 00:25:05.568 "traddr": "10.0.0.2", 00:25:05.568 "adrfam": "ipv4", 00:25:05.568 "trsvcid": "4420", 00:25:05.568 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:05.568 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:05.568 "hdgst": false, 00:25:05.568 "ddgst": false 00:25:05.568 }, 00:25:05.568 "method": "bdev_nvme_attach_controller" 00:25:05.568 },{ 00:25:05.568 "params": { 00:25:05.568 "name": "Nvme8", 00:25:05.568 "trtype": "tcp", 00:25:05.568 "traddr": "10.0.0.2", 00:25:05.568 "adrfam": "ipv4", 00:25:05.568 "trsvcid": "4420", 00:25:05.568 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:05.568 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:05.568 "hdgst": false, 00:25:05.568 "ddgst": false 00:25:05.568 }, 00:25:05.568 "method": 
"bdev_nvme_attach_controller" 00:25:05.568 },{ 00:25:05.568 "params": { 00:25:05.568 "name": "Nvme9", 00:25:05.568 "trtype": "tcp", 00:25:05.568 "traddr": "10.0.0.2", 00:25:05.568 "adrfam": "ipv4", 00:25:05.568 "trsvcid": "4420", 00:25:05.568 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:05.568 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:05.568 "hdgst": false, 00:25:05.568 "ddgst": false 00:25:05.568 }, 00:25:05.568 "method": "bdev_nvme_attach_controller" 00:25:05.568 },{ 00:25:05.568 "params": { 00:25:05.568 "name": "Nvme10", 00:25:05.568 "trtype": "tcp", 00:25:05.568 "traddr": "10.0.0.2", 00:25:05.568 "adrfam": "ipv4", 00:25:05.568 "trsvcid": "4420", 00:25:05.568 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:05.568 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:05.568 "hdgst": false, 00:25:05.568 "ddgst": false 00:25:05.568 }, 00:25:05.568 "method": "bdev_nvme_attach_controller" 00:25:05.568 }' 00:25:05.568 [2024-07-20 17:17:21.563684] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:05.568 [2024-07-20 17:17:21.563757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612930 ] 00:25:05.568 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.568 [2024-07-20 17:17:21.627605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.568 [2024-07-20 17:17:21.711571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.464 Running I/O for 10 seconds... 00:25:07.464 17:17:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:07.464 17:17:23 -- common/autotest_common.sh@852 -- # return 0 00:25:07.464 17:17:23 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:07.464 17:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.464 17:17:23 -- common/autotest_common.sh@10 -- # set +x 00:25:07.464 17:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.464 17:17:23 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:07.464 17:17:23 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:07.464 17:17:23 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:07.464 17:17:23 -- target/shutdown.sh@57 -- # local ret=1 00:25:07.464 17:17:23 -- target/shutdown.sh@58 -- # local i 00:25:07.464 17:17:23 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:07.464 17:17:23 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:07.464 17:17:23 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:07.464 17:17:23 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:07.464 17:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.464 17:17:23 -- common/autotest_common.sh@10 -- # set +x 00:25:07.464 17:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.464 17:17:23 -- target/shutdown.sh@60 -- # read_io_count=3 00:25:07.464 17:17:23 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:25:07.464 17:17:23 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:07.464 17:17:23 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:07.464 17:17:23 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:07.464 17:17:23 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:07.464 17:17:23 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:07.464 17:17:23 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.464 17:17:23 -- common/autotest_common.sh@10 -- # set +x 00:25:07.464 17:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.722 17:17:23 -- target/shutdown.sh@60 -- # read_io_count=87 00:25:07.722 17:17:23 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:25:07.722 17:17:23 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:07.722 17:17:23 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:07.722 17:17:23 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:07.980 17:17:23 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:07.980 17:17:23 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:07.980 17:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:07.980 17:17:23 -- common/autotest_common.sh@10 -- # set +x 00:25:07.980 17:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.980 17:17:23 -- target/shutdown.sh@60 -- # read_io_count=213 00:25:07.980 17:17:23 -- target/shutdown.sh@63 -- # '[' 213 -ge 100 ']' 00:25:07.980 17:17:23 -- target/shutdown.sh@64 -- # ret=0 00:25:07.980 17:17:23 -- target/shutdown.sh@65 -- # break 00:25:07.980 17:17:23 -- target/shutdown.sh@69 -- # return 0 00:25:07.980 17:17:23 -- target/shutdown.sh@109 -- # killprocess 612930 00:25:07.980 17:17:23 -- common/autotest_common.sh@926 -- # '[' -z 612930 ']' 00:25:07.980 17:17:23 -- common/autotest_common.sh@930 -- # kill -0 612930 00:25:07.980 17:17:23 -- common/autotest_common.sh@931 -- # uname 00:25:07.980 17:17:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:07.980 17:17:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 612930 00:25:07.980 17:17:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:07.980 17:17:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:07.980 17:17:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 612930' 00:25:07.980 killing process with pid 612930 00:25:07.980 17:17:23 -- common/autotest_common.sh@945 -- # kill 612930 00:25:07.980 17:17:23 -- common/autotest_common.sh@950 -- # wait 612930 00:25:07.980 Received shutdown signal, test time was about 0.830048 seconds 00:25:07.980 00:25:07.980 Latency(us) 00:25:07.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.980 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme1n1 : 0.78 353.69 22.11 0.00 0.00 175686.96 19029.71 167772.16 00:25:07.980 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme2n1 : 0.82 385.80 24.11 0.00 0.00 152630.95 17670.45 125052.40 00:25:07.980 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme3n1 : 0.80 399.32 24.96 0.00 0.00 154352.29 9611.95 142140.30 00:25:07.980 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme4n1 : 0.81 334.46 20.90 0.00 0.00 172232.43 24078.41 150684.25 00:25:07.980 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme5n1 : 0.80 339.46 21.22 0.00 0.00 176803.17 20388.98 156121.32 00:25:07.980 Job: Nvme6n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme6n1 : 0.80 393.66 24.60 0.00 0.00 150992.79 22913.33 124275.67 00:25:07.980 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme7n1 : 0.81 386.85 24.18 0.00 0.00 152840.78 13301.38 140586.86 00:25:07.980 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme8n1 : 0.79 400.63 25.04 0.00 0.00 144468.62 27962.03 125052.40 00:25:07.980 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme9n1 : 0.80 344.36 21.52 0.00 0.00 165953.31 27767.85 163888.55 00:25:07.980 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:07.980 Verification LBA range: start 0x0 length 0x400 00:25:07.980 Nvme10n1 : 0.83 379.88 23.74 0.00 0.00 143310.32 25049.32 123498.95 00:25:07.980 =================================================================================================================== 00:25:07.980 Total : 3718.11 232.38 0.00 0.00 158154.39 9611.95 167772.16 00:25:08.238 17:17:24 -- target/shutdown.sh@112 -- # sleep 1 00:25:09.168 17:17:25 -- target/shutdown.sh@113 -- # kill -0 612678 00:25:09.168 17:17:25 -- target/shutdown.sh@115 -- # stoptarget 00:25:09.168 17:17:25 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:09.168 17:17:25 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:09.168 17:17:25 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:09.168 17:17:25 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:09.168 17:17:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:09.168 17:17:25 -- nvmf/common.sh@116 -- # sync 00:25:09.168 17:17:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:09.168 17:17:25 -- nvmf/common.sh@119 -- # set +e 00:25:09.168 17:17:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:09.168 17:17:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:09.168 rmmod nvme_tcp 00:25:09.168 rmmod nvme_fabrics 00:25:09.168 rmmod nvme_keyring 00:25:09.169 17:17:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:09.169 17:17:25 -- nvmf/common.sh@123 -- # set -e 00:25:09.169 17:17:25 -- nvmf/common.sh@124 -- # return 0 00:25:09.169 17:17:25 -- nvmf/common.sh@477 -- # '[' -n 612678 ']' 00:25:09.169 17:17:25 -- nvmf/common.sh@478 -- # killprocess 612678 00:25:09.169 17:17:25 -- common/autotest_common.sh@926 -- # '[' -z 612678 ']' 00:25:09.169 17:17:25 -- common/autotest_common.sh@930 -- # kill -0 612678 00:25:09.425 17:17:25 -- common/autotest_common.sh@931 -- # uname 00:25:09.425 17:17:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:09.425 17:17:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 612678 00:25:09.425 17:17:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:09.425 17:17:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:09.425 17:17:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 612678' 00:25:09.425 killing process with pid 612678 00:25:09.425 17:17:25 -- common/autotest_common.sh@945 -- # kill 612678 00:25:09.425 17:17:25 -- common/autotest_common.sh@950 -- # wait 612678 
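The teardown traced above is the killprocess helper: it verifies the pid still exists and still looks like an SPDK reactor, signals it, then reaps it with wait before nvmftestfini unloads the nvme-tcp modules. A condensed sketch of that shape, not the exact helper:

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 0                       # already gone, nothing to do
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 in this run
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2> /dev/null || true                 # reap; exit status is expected nonzero
}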
00:25:09.683 17:17:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:09.683 17:17:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:09.683 17:17:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:09.683 17:17:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.683 17:17:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:09.683 17:17:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.683 17:17:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.683 17:17:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.225 17:17:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:12.225 00:25:12.225 real 0m8.100s 00:25:12.225 user 0m24.770s 00:25:12.225 sys 0m1.602s 00:25:12.225 17:17:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.225 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.225 ************************************ 00:25:12.225 END TEST nvmf_shutdown_tc2 00:25:12.225 ************************************ 00:25:12.225 17:17:27 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:12.225 17:17:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:12.225 17:17:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:12.225 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.225 ************************************ 00:25:12.225 START TEST nvmf_shutdown_tc3 00:25:12.225 ************************************ 00:25:12.225 17:17:27 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:12.225 17:17:27 -- target/shutdown.sh@120 -- # starttarget 00:25:12.225 17:17:27 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:12.225 17:17:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:12.225 17:17:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.225 17:17:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:12.225 17:17:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:12.225 17:17:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:12.225 17:17:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.225 17:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.225 17:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.225 17:17:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:12.225 17:17:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:12.225 17:17:27 -- common/autotest_common.sh@10 -- # set +x 00:25:12.225 17:17:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:12.225 17:17:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:12.225 17:17:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:12.225 17:17:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:12.225 17:17:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:12.225 17:17:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:12.225 17:17:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:12.225 17:17:27 -- nvmf/common.sh@294 -- # net_devs=() 00:25:12.225 17:17:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:12.225 17:17:27 -- nvmf/common.sh@295 -- # e810=() 00:25:12.225 17:17:27 -- nvmf/common.sh@295 -- # local -ga e810 00:25:12.225 17:17:27 -- nvmf/common.sh@296 -- # x722=() 00:25:12.225 17:17:27 -- nvmf/common.sh@296 -- # local -ga x722 00:25:12.225 17:17:27 -- nvmf/common.sh@297 -- # mlx=() 00:25:12.225 
17:17:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:12.225 17:17:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.225 17:17:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:12.225 17:17:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:12.225 17:17:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:12.225 17:17:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:12.225 17:17:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:12.225 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:12.225 17:17:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:12.225 17:17:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:12.225 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:12.225 17:17:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:12.225 17:17:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:12.225 17:17:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.225 17:17:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:12.225 17:17:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.225 17:17:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:12.225 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:12.225 17:17:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.225 17:17:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:12.225 17:17:27 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.225 17:17:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:12.225 17:17:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.225 17:17:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:12.225 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:12.225 17:17:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.225 17:17:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:12.225 17:17:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:12.225 17:17:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:12.225 17:17:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:12.225 17:17:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.225 17:17:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.225 17:17:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.225 17:17:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:12.225 17:17:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.225 17:17:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.225 17:17:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:12.225 17:17:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.225 17:17:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.225 17:17:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:12.225 17:17:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:12.225 17:17:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.225 17:17:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.225 17:17:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.225 17:17:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.225 17:17:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:12.225 17:17:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.225 17:17:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.225 17:17:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.225 17:17:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:12.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:25:12.225 00:25:12.225 --- 10.0.0.2 ping statistics --- 00:25:12.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.225 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:25:12.225 17:17:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:25:12.225 00:25:12.225 --- 10.0.0.1 ping statistics --- 00:25:12.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.225 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:25:12.225 17:17:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.225 17:17:28 -- nvmf/common.sh@410 -- # return 0 00:25:12.225 17:17:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:12.225 17:17:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.225 17:17:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:12.225 17:17:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:12.225 17:17:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.225 17:17:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:12.225 17:17:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:12.225 17:17:28 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:12.225 17:17:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:12.225 17:17:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:12.225 17:17:28 -- common/autotest_common.sh@10 -- # set +x 00:25:12.225 17:17:28 -- nvmf/common.sh@469 -- # nvmfpid=613871 00:25:12.225 17:17:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:12.225 17:17:28 -- nvmf/common.sh@470 -- # waitforlisten 613871 00:25:12.225 17:17:28 -- common/autotest_common.sh@819 -- # '[' -z 613871 ']' 00:25:12.225 17:17:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.225 17:17:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:12.225 17:17:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.225 17:17:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:12.225 17:17:28 -- common/autotest_common.sh@10 -- # set +x 00:25:12.225 [2024-07-20 17:17:28.084756] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:12.225 [2024-07-20 17:17:28.084881] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.225 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.225 [2024-07-20 17:17:28.151147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:12.225 [2024-07-20 17:17:28.238717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:12.225 [2024-07-20 17:17:28.238884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.225 [2024-07-20 17:17:28.238904] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.225 [2024-07-20 17:17:28.238917] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
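Before the target's reactors come up below, note that the nvmf_tcp_init sequence traced above reduces to the following commands (a condensed sketch of the helper, not its full body; cvl_0_0 and cvl_0_1 are the ice netdevs discovered earlier, with the target port isolated in its own network namespace):

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

With both pings answering in well under a millisecond, nvmf_tgt is then launched inside the namespace so that it listens on 10.0.0.2:4420 while the test scripts drive it from the root namespace.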
00:25:12.226 [2024-07-20 17:17:28.239005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.226 [2024-07-20 17:17:28.239067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:12.226 [2024-07-20 17:17:28.239099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:12.226 [2024-07-20 17:17:28.239101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.157 17:17:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:13.157 17:17:29 -- common/autotest_common.sh@852 -- # return 0 00:25:13.157 17:17:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:13.157 17:17:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:13.157 17:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:13.157 17:17:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.157 17:17:29 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.157 17:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.157 17:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:13.157 [2024-07-20 17:17:29.074433] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.157 17:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.157 17:17:29 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:13.157 17:17:29 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:13.157 17:17:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:13.157 17:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:13.157 17:17:29 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:13.157 17:17:29 -- target/shutdown.sh@28 -- # cat 00:25:13.157 17:17:29 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:13.157 17:17:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.157 17:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:13.157 Malloc1 00:25:13.157 [2024-07-20 17:17:29.149440] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.157 Malloc2 
00:25:13.157 Malloc3 00:25:13.157 Malloc4 00:25:13.157 Malloc5 00:25:13.413 Malloc6 00:25:13.413 Malloc7 00:25:13.413 Malloc8 00:25:13.413 Malloc9 00:25:13.413 Malloc10 00:25:13.671 17:17:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.671 17:17:29 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:13.671 17:17:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:13.671 17:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:13.671 17:17:29 -- target/shutdown.sh@124 -- # perfpid=614058 00:25:13.671 17:17:29 -- target/shutdown.sh@125 -- # waitforlisten 614058 /var/tmp/bdevperf.sock 00:25:13.671 17:17:29 -- common/autotest_common.sh@819 -- # '[' -z 614058 ']' 00:25:13.671 17:17:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.671 17:17:29 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:13.671 17:17:29 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:13.671 17:17:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:13.671 17:17:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.671 17:17:29 -- nvmf/common.sh@520 -- # config=() 00:25:13.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.671 17:17:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:13.671 17:17:29 -- nvmf/common.sh@520 -- # local subsystem config 00:25:13.671 17:17:29 -- common/autotest_common.sh@10 -- # set +x 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:13.671 { 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme$subsystem", 00:25:13.671 "trtype": "$TEST_TRANSPORT", 00:25:13.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "$NVMF_PORT", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:13.671 "hdgst": ${hdgst:-false}, 00:25:13.671 "ddgst": ${ddgst:-false} 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 } 00:25:13.671 EOF 00:25:13.671 )") 00:25:13.671 17:17:29 -- nvmf/common.sh@542 -- # cat 00:25:13.671 17:17:29 -- nvmf/common.sh@544 -- # jq . 
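The gen_nvmf_target_json trace above follows a simple pattern: accumulate one bdev_nvme_attach_controller fragment per subsystem id, comma-join the fragments, and pretty-print with jq. A minimal sketch under that reading (the real helper wraps the fragments in a fuller bdev-subsystem envelope that the trace does not show; a bare JSON array stands in for it here):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '[%s]' "${config[*]}" | jq .    # fragments joined by IFS, then pretty-printed
}

Called as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, this expands into the ten Nvme1..Nvme10 attach entries that jq prints next.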
00:25:13.671 17:17:29 -- nvmf/common.sh@545 -- # IFS=, 00:25:13.671 17:17:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme1", 00:25:13.671 "trtype": "tcp", 00:25:13.671 "traddr": "10.0.0.2", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "4420", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:13.671 "hdgst": false, 00:25:13.671 "ddgst": false 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 },{ 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme2", 00:25:13.671 "trtype": "tcp", 00:25:13.671 "traddr": "10.0.0.2", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "4420", 00:25:13.671 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:13.671 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:13.671 "hdgst": false, 00:25:13.671 "ddgst": false 00:25:13.671 }, 00:25:13.671 "method": "bdev_nvme_attach_controller" 00:25:13.671 },{ 00:25:13.671 "params": { 00:25:13.671 "name": "Nvme3", 00:25:13.671 "trtype": "tcp", 00:25:13.671 "traddr": "10.0.0.2", 00:25:13.671 "adrfam": "ipv4", 00:25:13.671 "trsvcid": "4420", 00:25:13.672 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:13.672 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:13.672 "hdgst": false, 00:25:13.672 "ddgst": false 00:25:13.672 }, 00:25:13.672 "method": "bdev_nvme_attach_controller" 00:25:13.672 },{ 00:25:13.672 "params": { 00:25:13.672 "name": "Nvme4", 00:25:13.672 "trtype": "tcp", 00:25:13.672 "traddr": "10.0.0.2", 00:25:13.672 "adrfam": "ipv4", 00:25:13.672 "trsvcid": "4420", 00:25:13.672 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:13.672 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:13.672 "hdgst": false, 00:25:13.672 "ddgst": false 00:25:13.672 }, 00:25:13.672 "method": "bdev_nvme_attach_controller" 00:25:13.672 },{ 00:25:13.672 "params": { 00:25:13.672 "name": "Nvme5", 00:25:13.672 "trtype": "tcp", 00:25:13.672 "traddr": "10.0.0.2", 00:25:13.672 "adrfam": "ipv4", 00:25:13.672 "trsvcid": "4420", 00:25:13.672 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:13.672 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:13.672 "hdgst": false, 00:25:13.672 "ddgst": false 00:25:13.672 }, 00:25:13.672 "method": "bdev_nvme_attach_controller" 00:25:13.672 },{ 00:25:13.672 "params": { 00:25:13.672 "name": "Nvme6", 00:25:13.672 "trtype": "tcp", 00:25:13.672 "traddr": "10.0.0.2", 00:25:13.672 "adrfam": "ipv4", 00:25:13.672 "trsvcid": "4420", 00:25:13.672 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:13.672 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:13.672 "hdgst": false, 00:25:13.672 "ddgst": false 00:25:13.672 }, 00:25:13.672 "method": "bdev_nvme_attach_controller" 00:25:13.672 },{ 00:25:13.672 "params": { 00:25:13.672 "name": "Nvme7", 00:25:13.672 "trtype": "tcp", 00:25:13.672 "traddr": "10.0.0.2", 00:25:13.672 "adrfam": "ipv4", 00:25:13.672 "trsvcid": "4420", 00:25:13.672 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:13.672 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:13.672 "hdgst": false, 00:25:13.672 "ddgst": false 00:25:13.672 }, 00:25:13.672 "method": "bdev_nvme_attach_controller" 00:25:13.672 },{ 00:25:13.672 "params": { 00:25:13.672 "name": "Nvme8", 00:25:13.672 "trtype": "tcp", 00:25:13.672 "traddr": "10.0.0.2", 00:25:13.672 "adrfam": "ipv4", 00:25:13.672 "trsvcid": "4420", 00:25:13.672 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:13.672 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:13.672 "hdgst": false, 00:25:13.672 "ddgst": false 00:25:13.672 }, 00:25:13.672 "method": 
"bdev_nvme_attach_controller" 00:25:13.672 },{ 00:25:13.672 "params": { 00:25:13.672 "name": "Nvme9", 00:25:13.672 "trtype": "tcp", 00:25:13.672 "traddr": "10.0.0.2", 00:25:13.672 "adrfam": "ipv4", 00:25:13.672 "trsvcid": "4420", 00:25:13.672 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:13.672 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:13.672 "hdgst": false, 00:25:13.672 "ddgst": false 00:25:13.672 }, 00:25:13.672 "method": "bdev_nvme_attach_controller" 00:25:13.672 },{ 00:25:13.672 "params": { 00:25:13.672 "name": "Nvme10", 00:25:13.672 "trtype": "tcp", 00:25:13.672 "traddr": "10.0.0.2", 00:25:13.672 "adrfam": "ipv4", 00:25:13.672 "trsvcid": "4420", 00:25:13.672 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:13.672 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:13.672 "hdgst": false, 00:25:13.672 "ddgst": false 00:25:13.672 }, 00:25:13.672 "method": "bdev_nvme_attach_controller" 00:25:13.672 }' 00:25:13.672 [2024-07-20 17:17:29.657398] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:13.672 [2024-07-20 17:17:29.657482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid614058 ] 00:25:13.672 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.672 [2024-07-20 17:17:29.721046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.672 [2024-07-20 17:17:29.804890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.044 Running I/O for 10 seconds... 00:25:15.302 17:17:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:15.302 17:17:31 -- common/autotest_common.sh@852 -- # return 0 00:25:15.302 17:17:31 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:15.302 17:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.302 17:17:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.302 17:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.302 17:17:31 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.302 17:17:31 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:15.302 17:17:31 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:15.302 17:17:31 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:15.302 17:17:31 -- target/shutdown.sh@57 -- # local ret=1 00:25:15.302 17:17:31 -- target/shutdown.sh@58 -- # local i 00:25:15.302 17:17:31 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:15.302 17:17:31 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:15.302 17:17:31 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:15.302 17:17:31 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:15.302 17:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.302 17:17:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.302 17:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.302 17:17:31 -- target/shutdown.sh@60 -- # read_io_count=87 00:25:15.302 17:17:31 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:25:15.302 17:17:31 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:15.560 17:17:31 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:15.560 17:17:31 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:15.560 17:17:31 -- target/shutdown.sh@60 -- # rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:15.560 17:17:31 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:15.560 17:17:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:15.560 17:17:31 -- common/autotest_common.sh@10 -- # set +x 00:25:15.560 17:17:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:15.560 17:17:31 -- target/shutdown.sh@60 -- # read_io_count=167 00:25:15.560 17:17:31 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:25:15.560 17:17:31 -- target/shutdown.sh@64 -- # ret=0 00:25:15.560 17:17:31 -- target/shutdown.sh@65 -- # break 00:25:15.560 17:17:31 -- target/shutdown.sh@69 -- # return 0 00:25:15.560 17:17:31 -- target/shutdown.sh@134 -- # killprocess 613871 00:25:15.560 17:17:31 -- common/autotest_common.sh@926 -- # '[' -z 613871 ']' 00:25:15.560 17:17:31 -- common/autotest_common.sh@930 -- # kill -0 613871 00:25:15.560 17:17:31 -- common/autotest_common.sh@931 -- # uname 00:25:15.834 17:17:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:15.834 17:17:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 613871 00:25:15.834 17:17:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:15.834 17:17:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:15.834 17:17:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 613871' killing process with pid 613871 00:25:15.834 17:17:31 -- common/autotest_common.sh@945 -- # kill 613871 00:25:15.834 17:17:31 -- common/autotest_common.sh@950 -- # wait 613871 00:25:15.834
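For reference, the waitforio helper that just returned polls bdevperf over its RPC socket until the named bdev has accumulated at least 100 reads, giving up after ten 0.25 s intervals. A minimal sketch reconstructed from the trace (rpc_cmd being the autotest wrapper around SPDK's rpc.py):

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0    # enough I/O observed; safe to kill the target under load
            break
        fi
        sleep 0.25
    done
    return $ret
}

Here the first poll saw 87 reads and the second 167, so the loop exited with ret=0 and killprocess sent a plain kill (SIGTERM by default) to nvmf_tgt (pid 613871) while bdevperf still had I/O in flight; the error storm that follows is the target (tcp.c) and the initiator (nvme_tcp.c, nvme_qpair.c) each logging that deliberate teardown race.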
[2024-07-20 17:17:31.750071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaabff0 is same with the state(5) to be set
(this message repeats for tqpair=0xaabff0 dozens of times, with successive timestamps through 17:17:31.750982) 00:25:15.834
[2024-07-20 17:17:31.752496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae980 is same with the state(5) to be set
(this message repeats for tqpair=0xaae980 dozens of times, with successive timestamps through 17:17:31.753320) 00:25:15.835
same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.753272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae980 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.753284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae980 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.753296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae980 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.753308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae980 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.753320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae980 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.757036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.835 [2024-07-20 17:17:31.757089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.835 [2024-07-20 17:17:31.757109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.835 [2024-07-20 17:17:31.757123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.835 [2024-07-20 17:17:31.757123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.757138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.835 [2024-07-20 17:17:31.757151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with t[2024-07-20 17:17:31.757152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 che state(5) to be set 00:25:15.835 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.835 [2024-07-20 17:17:31.757167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.757170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.835 [2024-07-20 17:17:31.757181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.757184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.835 [2024-07-20 17:17:31.757194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.757198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209d9b0 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.757207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set 00:25:15.835 [2024-07-20 17:17:31.757220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set 
00:25:15.835 [2024-07-20 17:17:31.757232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
[... previous message repeated 4 more times, 17:17:31.757245 through 17:17:31.757283 ...]
00:25:15.835 [2024-07-20 17:17:31.757283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.835 [2024-07-20 17:17:31.757295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.835 [2024-07-20 17:17:31.757308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.835 [2024-07-20 17:17:31.757334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.835 [2024-07-20 17:17:31.757354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.835 [2024-07-20 17:17:31.757367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.835 [2024-07-20 17:17:31.757379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.835 [2024-07-20 17:17:31.757392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.835 [2024-07-20 17:17:31.757404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206ee40 is same with the state(5) to be set
00:25:15.835 [2024-07-20 17:17:31.757417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac480 is same with the state(5) to be set
[... previous tcp.c:1574 message for tqpair=0xaac480 repeated 41 more times, 17:17:31.757429 through 17:17:31.758128 ...]
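The flood of tcp.c:1574 (target side) and nvme_tcp.c:322 (host side) lines above all comes from the same kind of guard at the top of a recv-state setter: once the qpair's receive state machine is already in the requested state, the transition is refused and logged, and every subsequent socket event that retries it emits one more line. A minimal sketch of such a guard follows, assuming a qpair struct with an enum recv-state field; names and enum numbering are illustrative, not the exact SPDK source.

/* Sketch of the same-state guard behind the repeated tcp.c:1574 /
 * nvme_tcp.c:322 lines. Field and enum names are illustrative. */
#include <stdio.h>

enum tcp_pdu_recv_state {
    RECV_STATE_AWAIT_PDU_READY,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR,   /* "state(5)" in the log, under this illustrative numbering */
};

struct tcp_qpair {
    enum tcp_pdu_recv_state recv_state;
};

static void
tcp_qpair_set_recv_state(struct tcp_qpair *tqpair, enum tcp_pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* Re-entering the current state is a no-op; each retry of the
         * transition logs one of the lines seen above. */
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, state);
        return;
    }
    tqpair->recv_state = state;
    /* ...per-state bookkeeping would follow here... */
}

Once the qpair is parked in its terminal error state, repeatedly asking for that same state is harmless, which is why the run of identical lines is noise rather than a cascade of distinct failures.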
00:25:15.836 [2024-07-20 17:17:31.762262] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:15.836 [2024-07-20 17:17:31.762368] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:15.836 [2024-07-20 17:17:31.763506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaac930 is same with the state(5) to be set
[... previous message repeated 62 more times for tqpair=0xaac930, 17:17:31.763541 through 17:17:31.764329 ...]
00:25:15.836 [2024-07-20 17:17:31.765619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaacdc0 is same with the state(5) to be set
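The two "Unexpected PDU type 0x00" lines are the host-side common-header handler rejecting an inbound PDU whose type byte it can never legitimately receive: 0x00 is ICReq, which only ever flows host to controller, and a zero-filled read from a connection being torn down decodes the same way. The sketch below shows this kind of type validation; the PDU type values are the NVMe/TCP spec opcodes, while the function shape is an illustrative assumption, not the exact nvme_tcp.c code.

/* Sketch of host-side PDU common-header validation that would produce
 * "Unexpected PDU type 0x00". */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define NVME_TCP_PDU_TYPE_IC_RESP        0x01
#define NVME_TCP_PDU_TYPE_C2H_TERM_REQ   0x03
#define NVME_TCP_PDU_TYPE_CAPSULE_RESP   0x05
#define NVME_TCP_PDU_TYPE_C2H_DATA       0x07
#define NVME_TCP_PDU_TYPE_R2T            0x09

static bool
pdu_type_expected_by_host(uint8_t pdu_type)
{
    switch (pdu_type) {
    case NVME_TCP_PDU_TYPE_IC_RESP:
    case NVME_TCP_PDU_TYPE_C2H_TERM_REQ:
    case NVME_TCP_PDU_TYPE_CAPSULE_RESP:
    case NVME_TCP_PDU_TYPE_C2H_DATA:
    case NVME_TCP_PDU_TYPE_R2T:
        return true;
    default:
        /* 0x00 (ICReq) never flows controller -> host; zeroed bytes from
         * a dying socket also land here. */
        return false;
    }
}

static int
pdu_ch_handle(uint8_t pdu_type)
{
    if (!pdu_type_expected_by_host(pdu_type)) {
        fprintf(stderr, "Unexpected PDU type 0x%02x\n", pdu_type);
        return -1; /* caller moves the qpair recv state to error/quiescing */
    }
    return 0;
}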
[... previous message repeated 48 more times for tqpair=0xaacdc0, 17:17:31.765655 through 17:17:31.766250 ...]
00:25:15.837 [2024-07-20 17:17:31.767052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad270 is same with the state(5) to be set
[... previous message repeated 62 more times for tqpair=0xaad270, 17:17:31.767084 through 17:17:31.767886 ...]
00:25:15.838 [2024-07-20 17:17:31.768375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209d9b0 (9): Bad file descriptor
00:25:15.838 [2024-07-20 17:17:31.768452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.838 [2024-07-20 17:17:31.768477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUEST commands cid:1 through cid:3 printed and completed the same way, 17:17:31.768494 through 17:17:31.768561 ...]
00:25:15.838 [2024-07-20 17:17:31.768574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213c8e0 is same with the state(5) to be set
00:25:15.838 [2024-07-20 17:17:31.768619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.838 [2024-07-20 17:17:31.768639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUEST commands cid:1 through cid:3 printed and completed the same way, 17:17:31.768654 through 17:17:31.768731 ...]
00:25:15.838 [2024-07-20 17:17:31.768744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209b0a0 is same with the state(5) to be set
00:25:15.838 [2024-07-20 17:17:31.768791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.838 [2024-07-20 17:17:31.768830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUEST commands cid:1 through cid:3 printed and completed the same way, 17:17:31.768848 through 17:17:31.768915 ...]
00:25:15.839 [2024-07-20 17:17:31.768928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207c530 is same with the state(5) to be set
00:25:15.839 [2024-07-20 17:17:31.768954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206ee40 (9): Bad file descriptor
00:25:15.839 [2024-07-20 17:17:31.769010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.839 [2024-07-20 17:17:31.769030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUEST commands cid:1 through cid:3 printed and completed the same way, 17:17:31.769046 through 17:17:31.769113 ...]
00:25:15.839 [2024-07-20 17:17:31.769126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071ba0 is same with the state(5) to be set
00:25:15.839 [2024-07-20 17:17:31.769232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaad720 is same with the state(5) to be set
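Each "Failed to flush tqpair=... (9): Bad file descriptor" means the socket under that host qpair is already gone (errno 9, EBADF), so nothing pending can be sent. The host then drains every still-outstanding command on the queue (here the four ASYNC EVENT REQUESTs each admin queue keeps posted) by synthesizing a completion with generic status 0x08, Command Aborted due to SQ Deletion, which the completion printer renders as "(00/08)" (SCT 0x0 / SC 0x08). A minimal sketch of that drain step, assuming an illustrative request list rather than SPDK's actual structures:

/* Sketch of failing outstanding requests with ABORTED - SQ DELETION when a
 * qpair is torn down. The status values (SCT 0x0, SC 0x08) are the "(00/08)"
 * printed above; struct and list handling are illustrative. */
#include <stdint.h>
#include <stddef.h>

#define SCT_GENERIC            0x0
#define SC_ABORTED_SQ_DELETION 0x08

struct completion {
    uint16_t sct;   /* status code type */
    uint16_t sc;    /* status code */
    uint16_t cid;   /* command identifier being completed */
};

struct request {
    uint16_t cid;
    void (*cb)(struct completion *cpl);
    struct request *next;
};

static void
abort_outstanding_reqs(struct request *head)
{
    for (struct request *req = head; req != NULL; req = req->next) {
        struct completion cpl = {
            .sct = SCT_GENERIC,
            .sc  = SC_ABORTED_SQ_DELETION,
            .cid = req->cid,
        };
        /* Each callback accounts for one NOTICE pair in the log: the
         * aborted command, then its synthesized completion. */
        req->cb(&cpl);
    }
}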
[... previous message repeated 62 more times for tqpair=0xaad720, 17:17:31.769262 through 17:17:31.770062 ...]
00:25:15.839 [2024-07-20 17:17:31.771643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.839 [2024-07-20 17:17:31.771671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.839 [2024-07-20 17:17:31.771701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.839 [2024-07-20 17:17:31.771717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.839 [2024-07-20 17:17:31.771735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.839 [2024-07-20 17:17:31.771750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 38 more outstanding I/O commands (READ/WRITE, sqid:1, nsid:1, len:128) printed and completed the same way with ABORTED - SQ DELETION (00/08), 17:17:31.771766 through 17:17:31.772949 ...]
00:25:15.840 [2024-07-20 17:17:31.772965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.840 [2024-07-20 17:17:31.772979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.772995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 
[2024-07-20 17:17:31.773282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 
17:17:31.773584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.840 [2024-07-20 17:17:31.773643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.840 [2024-07-20 17:17:31.773752] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x219dc30 was disconnected and freed. reset controller. 00:25:15.840 [2024-07-20 17:17:31.774562] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:15.840 [2024-07-20 17:17:31.774813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.840 [2024-07-20 17:17:31.774849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.840 [2024-07-20 17:17:31.774862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.774993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with 
the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775395] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775567] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.775615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaadbb0 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.776278] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:15.841 [2024-07-20 17:17:31.776317] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2071ba0 (9): Bad file descriptor 00:25:15.841 [2024-07-20 17:17:31.776359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:15.841 [2024-07-20 17:17:31.776591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.776756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.776791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.776818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.776840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776843] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.776856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.776857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.776872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.841 [2024-07-20 17:17:31.776881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.841 [2024-07-20 17:17:31.776888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.841 [2024-07-20 17:17:31.776893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.776903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.776906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.776918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.776919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.776931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.776934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.776943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.776951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.776956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.776965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.776969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.776981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.776981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.776995] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.776997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 
17:17:31.777153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777266] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 
[2024-07-20 17:17:31.777320] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:15.842 [2024-07-20 17:17:31.777498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.842 [2024-07-20 17:17:31.777581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.842 [2024-07-20 17:17:31.777609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.842 [2024-07-20 17:17:31.777611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.777628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.777644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:15.843 [2024-07-20 17:17:31.777657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.777661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.777676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.777692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777697] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae040 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.777707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.777983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.777996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.843 [2024-07-20 17:17:31.778384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.843 [2024-07-20 17:17:31.778399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219b810 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778489] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x219b810 was disconnected and freed. reset controller. 
00:25:15.843 [2024-07-20 17:17:31.778498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778558] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:15.843 [2024-07-20 17:17:31.778572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778760] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:15.843 [2024-07-20 17:17:31.778779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778833] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:15.843 [2024-07-20 17:17:31.778820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.843 [2024-07-20 17:17:31.778856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.778997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.843 [2024-07-20 17:17:31.779093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.843 [2024-07-20 17:17:31.779106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.843 [2024-07-20 17:17:31.779120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.844 [2024-07-20 17:17:31.779132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.844 [2024-07-20 17:17:31.779144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.844 [2024-07-20 17:17:31.779157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.844 [2024-07-20 17:17:31.779170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:15.844 [2024-07-20 17:17:31.779201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.844 [2024-07-20 17:17:31.779215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fcbeb0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213c8e0 (9): Bad file descriptor
00:25:15.844 [2024-07-20 17:17:31.779264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set
00:25:15.844 [2024-07-20 17:17:31.779277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209b0a0 (9): Bad file descriptor 00:25:15.844 [2024-07-20 17:17:31.779301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207c530 (9): Bad file descriptor 00:25:15.844 [2024-07-20 17:17:31.779325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaae4d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b06d0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211edc0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.779687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.844 [2024-07-20 17:17:31.779800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.779815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206a1c0 is same with the state(5) to be set 00:25:15.844 [2024-07-20 17:17:31.781273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781351] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.844 [2024-07-20 17:17:31.781976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.844 [2024-07-20 17:17:31.781999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.782970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.782984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.783000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.783014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.783030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.783044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.783059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.783073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.783089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.783103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.783119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.783136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.783152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.783166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.783182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.783197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.783212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.783226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.845 [2024-07-20 17:17:31.783241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222aae0 is same with the state(5) to be set 00:25:15.845 [2024-07-20 17:17:31.783336] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x222aae0 was disconnected and freed. reset controller. 00:25:15.845 [2024-07-20 17:17:31.783584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:15.845 [2024-07-20 17:17:31.784110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.845 [2024-07-20 17:17:31.784328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.845 [2024-07-20 17:17:31.784353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2071ba0 with addr=10.0.0.2, port=4420 00:25:15.845 [2024-07-20 17:17:31.784369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071ba0 is same with the state(5) to be set 00:25:15.845 [2024-07-20 17:17:31.784431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.845 [2024-07-20 17:17:31.784452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.784973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.784987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.846 [2024-07-20 17:17:31.785805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.846 [2024-07-20 17:17:31.785823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:15.846 [... repeated nvme_qpair.c NOTICE pairs elided: WRITE/READ commands on sqid:1 (lba 32512-34688), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:15.847 [2024-07-20 17:17:31.792545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a6a0 is same with the state(5) to be set
00:25:15.847 [... repeated nvme_qpair.c NOTICE pairs elided: WRITE/READ commands on sqid:1 (lba 18944-29056), each completed ABORTED - SQ DELETION (00/08) ...]
00:25:15.861 [2024-07-20 17:17:31.797349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2231800 is same with the state(5) to be set
00:25:15.861 [2024-07-20 17:17:31.799447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:15.861 [2024-07-20 17:17:31.799482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:15.861 [2024-07-20 17:17:31.799853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.861 [2024-07-20 17:17:31.800083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.861 [2024-07-20 17:17:31.800110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207c530 with addr=10.0.0.2, port=4420
00:25:15.861 [2024-07-20 17:17:31.800128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207c530 is same with the state(5) to be set
00:25:15.862 [2024-07-20 17:17:31.800160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2071ba0 (9): Bad file descriptor
00:25:15.862 [2024-07-20 17:17:31.800217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcbeb0 (9): Bad file descriptor
00:25:15.862 [2024-07-20 17:17:31.800259] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:15.862 [2024-07-20 17:17:31.800300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21b06d0 (9): Bad file descriptor
00:25:15.862 [2024-07-20 17:17:31.800332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211edc0 (9): Bad file descriptor
00:25:15.862 [2024-07-20 17:17:31.800362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206a1c0 (9): Bad file descriptor
00:25:15.862 [2024-07-20 17:17:31.800393] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
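The "(00/08)" that spdk_nvme_print_completion logs above is the NVMe completion status rendered as status-code-type/status-code: SCT 0x00 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", which is what every queued I/O receives when the submission queue is torn down; the trailing p/m/dnr values are the phase, more and do-not-retry bits of the same status word. A minimal standalone decode of that field layout (illustrative, not part of the test):

    /* sq_deletion_decode.c - decode the SCT/SC pair that
     * spdk_nvme_print_completion logs as "(00/08)".
     * NVMe completion status word layout: bit 0 = phase tag (p),
     * bits 8:1 = SC, bits 11:9 = SCT, bit 14 = more (m), bit 15 = dnr. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t status = (uint16_t)((0x0u << 9) | (0x08u << 1)); /* SCT=00, SC=08 */
        uint8_t sct = (status >> 9) & 0x7;
        uint8_t sc  = (status >> 1) & 0xff;
        uint8_t p   = status & 0x1;
        uint8_t m   = (status >> 14) & 0x1;
        uint8_t dnr = (status >> 15) & 0x1;
        printf("(%02x/%02x) p:%u m:%u dnr:%u -> %s\n", sct, sc, p, m, dnr,
               (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other");
        return 0;
    }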
00:25:15.862 [2024-07-20 17:17:31.800418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207c530 (9): Bad file descriptor
00:25:15.862 [2024-07-20 17:17:31.800851] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:15.862 [2024-07-20 17:17:31.801041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:15.862 [2024-07-20 17:17:31.801541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.862 [2024-07-20 17:17:31.801753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.862 [2024-07-20 17:17:31.801778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x206ee40 with addr=10.0.0.2, port=4420
00:25:15.862 [2024-07-20 17:17:31.801806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206ee40 is same with the state(5) to be set
00:25:15.862 [2024-07-20 17:17:31.802010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.862 [2024-07-20 17:17:31.802227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.862 [2024-07-20 17:17:31.802252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c8e0 with addr=10.0.0.2, port=4420
00:25:15.862 [2024-07-20 17:17:31.802267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213c8e0 is same with the state(5) to be set
00:25:15.862 [2024-07-20 17:17:31.802285] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:25:15.862 [2024-07-20 17:17:31.802298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:25:15.862 [2024-07-20 17:17:31.802316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
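The errno = 111 from posix_sock_create above is ECONNREFUSED: while the controllers are resetting, the host keeps dialing 10.0.0.2:4420 but the target side is not accepting, so each connect() is refused and the qpair reconnect fails. A plain-sockets sketch of the same failure mode (illustrative only; this is not SPDK's actual posix.c code):

    /* connect_refused.c - show connect() failing with errno = 111
     * (ECONNREFUSED) when nothing listens on the target port. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* NVMe-oF target from the log */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* A reachable host with the port closed typically reports
             * errno = 111 (ECONNREFUSED) on Linux, as seen in the log. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }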
00:25:15.862 [... repeated nvme_qpair.c NOTICE pairs elided: WRITE/READ commands on sqid:1 (lba 24320-34688), each completed ABORTED - SQ DELETION (00/08) ...]
00:25:15.863 [2024-07-20 17:17:31.804670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cdb0 is same with the state(5) to be set
00:25:15.863 [2024-07-20 17:17:31.806242] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:15.863 [2024-07-20 17:17:31.806597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.863 [2024-07-20 17:17:31.806625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:15.863 [2024-07-20 17:17:31.806880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.863 [2024-07-20 17:17:31.807101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.863 [2024-07-20 17:17:31.807126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209d9b0 with addr=10.0.0.2, port=4420
00:25:15.863 [2024-07-20 17:17:31.807148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209d9b0 is same with the state(5) to be set
00:25:15.863 [2024-07-20 17:17:31.807172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206ee40 (9): Bad file descriptor
00:25:15.863 [2024-07-20 17:17:31.807191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213c8e0 (9): Bad file descriptor
00:25:15.863 [2024-07-20 17:17:31.807207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:25:15.863 [2024-07-20 17:17:31.807220] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:25:15.863 [2024-07-20 17:17:31.807234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:25:15.863 [2024-07-20 17:17:31.807396] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:15.863 [2024-07-20 17:17:31.807436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.863 [2024-07-20 17:17:31.808086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.863 [2024-07-20 17:17:31.808294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.863 [2024-07-20 17:17:31.808321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x209b0a0 with addr=10.0.0.2, port=4420
00:25:15.863 [2024-07-20 17:17:31.808337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x209b0a0 is same with the state(5) to be set
00:25:15.863 [2024-07-20 17:17:31.808356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209d9b0 (9): Bad file descriptor
00:25:15.863 [2024-07-20 17:17:31.808373] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:15.863 [2024-07-20 17:17:31.808386] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:15.863 [2024-07-20 17:17:31.808399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:15.863 [2024-07-20 17:17:31.808418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:25:15.863 [2024-07-20 17:17:31.808432] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:25:15.863 [2024-07-20 17:17:31.808445] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:25:15.863 [2024-07-20 17:17:31.808777] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.863 [2024-07-20 17:17:31.808806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.863 [2024-07-20 17:17:31.808825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x209b0a0 (9): Bad file descriptor
00:25:15.863 [2024-07-20 17:17:31.808842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:25:15.863 [2024-07-20 17:17:31.808855] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:25:15.863 [2024-07-20 17:17:31.808868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:25:15.863 [2024-07-20 17:17:31.808922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:15.863 [2024-07-20 17:17:31.808943] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:25:15.863 [2024-07-20 17:17:31.808956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:25:15.863 [2024-07-20 17:17:31.808969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:25:15.863 [2024-07-20 17:17:31.809023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
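Each "[nqn...cnodeN] Ctrlr is in error state / controller reinitialization failed / in failed state." triplet above is one reset attempt that could not reconnect: the qpairs are torn down, the TCP reconnect is refused, the controller is marked failed, and bdev_nvme then reports "Resetting controller failed." A rough host-side sketch of that sequence against SPDK's public nvme API (simplified and illustrative; the autotest drives this through the bdev_nvme layer, not directly like this):

    /* reset_after_disconnect.c - sketch: connect an NVMe-oF/TCP controller,
     * then attempt a reset after the target has gone away. Illustrative only. */
    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "reset_sketch";
        if (spdk_env_init(&opts) < 0) return 1;

        struct spdk_nvme_transport_id trid = {};
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");   /* target from the log */
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

        struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "connect failed (e.g. ECONNREFUSED, as in the log)\n");
            return 1;
        }

        /* ... target deletes the SQs / drops the TCP connection here ... */

        if (spdk_nvme_ctrlr_reset(ctrlr) != 0 && spdk_nvme_ctrlr_is_failed(ctrlr)) {
            /* Matches "controller reinitialization failed" + "in failed state.":
             * a failed ctrlr cannot be reset again, only detached. */
            spdk_nvme_detach(ctrlr);
            return 1;
        }
        spdk_nvme_detach(ctrlr);
        return 0;
    }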
00:25:15.863 [... repeated nvme_qpair.c NOTICE pairs elided: WRITE/READ commands on sqid:1 (lba 24320-30720), each completed ABORTED - SQ DELETION (00/08) ...]
00:25:15.864 [2024-07-20 17:17:31.810567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.810981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.810995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.864 [2024-07-20 17:17:31.811574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.864 [2024-07-20 17:17:31.811593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.811608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.811622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222c0c0 is same with the state(5) to be set 00:25:15.865 [2024-07-20 17:17:31.812872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.812895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.812915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.812931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.812948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.812962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.812979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.812993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.813967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.813982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.865 [2024-07-20 17:17:31.814271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.865 [2024-07-20 17:17:31.814286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814590] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.814841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.814856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222d6a0 is same with the state(5) to be set 00:25:15.866 [2024-07-20 17:17:31.816099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816143] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27008 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.816972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.816986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.817002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.817016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.817033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.866 [2024-07-20 17:17:31.817047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.866 [2024-07-20 17:17:31.817063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:25:15.866 [2024-07-20 17:17:31.817077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:15.866 [2024-07-20 17:17:31.817093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... the same pair, a queued READ or WRITE command print followed by an "ABORTED - SQ DELETION (00/08)" completion, repeats for every I/O still outstanding on the deleted submission queue (cids 0-63, lbas in the 24960-35072 range) ...]
00:25:15.867 [2024-07-20 17:17:31.818070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222ec80 is same with the state(5) to be set
[... an equivalent burst of aborted READ/WRITE commands follows for a second qpair ...]
00:25:15.868 [2024-07-20 17:17:31.821281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2230220 is same with the state(5) to be set
00:25:15.868 [2024-07-20 17:17:31.822848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:15.868 [2024-07-20 17:17:31.822881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:15.868 [2024-07-20 17:17:31.822902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:15.868 [2024-07-20 17:17:31.822920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:15.868 [2024-07-20 17:17:31.822938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:15.868 [2024-07-20 17:17:31.823079] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
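Every completion in the burst above carries status "(00/08)": status code type 0x0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion, which is the expected way in-flight I/O drains when the shutdown path tears down the submission queues. A quick way to tally what got aborted from a saved copy of this output (a minimal sketch; "test.log" is a hypothetical capture file, not part of the test suite):

    # count aborted READs vs WRITEs from the qpair command prints
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]*' test.log | awk '{print $NF}' | sort | uniq -c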
00:25:15.868 task offset: 29184 on job bdev=Nvme4n1 fails
00:25:15.868
00:25:15.868 Latency(us)
00:25:15.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.868 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.868 Job: Nvme1n1 ended in about 0.63 seconds with error
00:25:15.868 Verification LBA range: start 0x0 length 0x400
00:25:15.868 Nvme1n1 : 0.63 329.19 20.57 101.29 0.00 147513.83 89711.50 116508.44
00:25:15.868 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.868 Job: Nvme2n1 ended in about 0.62 seconds with error
00:25:15.868 Verification LBA range: start 0x0 length 0x400
00:25:15.868 Nvme2n1 : 0.62 264.89 16.56 103.37 0.00 170339.94 96702.01 166995.44
00:25:15.868 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.868 Job: Nvme3n1 ended in about 0.64 seconds with error
00:25:15.868 Verification LBA range: start 0x0 length 0x400
00:25:15.868 Nvme3n1 : 0.64 323.03 20.19 99.39 0.00 146861.32 90876.59 114955.00
00:25:15.868 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.868 Job: Nvme4n1 ended in about 0.61 seconds with error
00:25:15.868 Verification LBA range: start 0x0 length 0x400
00:25:15.868 Nvme4n1 : 0.61 346.94 21.68 104.24 0.00 135558.44 4587.52 115731.72
00:25:15.868 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.868 Job: Nvme5n1 ended in about 0.63 seconds with error
00:25:15.868 Verification LBA range: start 0x0 length 0x400
00:25:15.868 Nvme5n1 : 0.63 328.52 20.53 101.08 0.00 140796.74 71458.51 117285.17
00:25:15.868 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.868 Job: Nvme6n1 ended in about 0.65 seconds with error
00:25:15.868 Verification LBA range: start 0x0 length 0x400
00:25:15.868 Nvme6n1 : 0.65 319.59 19.97 98.33 0.00 143169.03 53593.88 125052.40
00:25:15.868 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.868 Job: Nvme7n1 ended in about 0.65 seconds with error
00:25:15.868 Verification LBA range: start 0x0 length 0x400
00:25:15.868 Nvme7n1 : 0.65 318.02 19.88 97.85 0.00 142099.61 70293.43 115731.72
00:25:15.868 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.868 Job: Nvme8n1 ended in about 0.66 seconds with error
00:25:15.868 Verification LBA range: start 0x0 length 0x400
00:25:15.868 Nvme8n1 : 0.66 316.47 19.78 97.38 0.00 141113.00 85827.89 115731.72
00:25:15.869 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.869 Job: Nvme9n1 ended in about 0.66 seconds with error
00:25:15.869 Verification LBA range: start 0x0 length 0x400
00:25:15.869 Nvme9n1 : 0.66 319.48 19.97 96.90 0.00 138607.10 14078.10 119615.34
00:25:15.869 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:15.869 Job: Nvme10n1 ended in about 0.64 seconds with error
00:25:15.869 Verification LBA range: start 0x0 length 0x400
00:25:15.869 Nvme10n1 : 0.64 257.63 16.10 100.54 0.00 158512.80 86216.25 126605.84
===================================================================================================================
00:25:15.869 Total : 3123.76 195.23 1000.39 0.00 145828.65 4587.52 166995.44
00:25:15.869 [2024-07-20 17:17:31.849804] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:15.869 [2024-07-20 17:17:31.849902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
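The MiB/s column in the table above follows directly from the 64 KiB I/O size (65536 bytes) used by every job: MiB/s = IOPS * 65536 / 1048576, i.e. IOPS / 16. A quick check against the Nvme1n1 row and the Total row (a sketch using bc):

    echo 'scale=2; 329.19 * 65536 / 1048576' | bc    # -> 20.57, matching Nvme1n1
    echo 'scale=2; 3123.76 * 65536 / 1048576' | bc   # -> 195.23, matching Total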
00:25:15.869 [2024-07-20 17:17:31.850409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.869 [2024-07-20 17:17:31.850685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:15.869 [2024-07-20 17:17:31.850713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2071ba0 with addr=10.0.0.2, port=4420
00:25:15.869 [2024-07-20 17:17:31.850735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2071ba0 is same with the state(5) to be set
[... the same sequence (two connect() failures with errno = 111, a sock connection error, a recv-state error) repeats for tqpairs 0x207c530, 0x211edc0, 0x21b06d0, 0x1fcbeb0, 0x206a1c0, 0x213c8e0, 0x206ee40, 0x209d9b0 and 0x209b0a0 while cnode1 through cnode10 are reset; each attempt then logs "Failed to flush tqpair (9): Bad file descriptor" plus further "Unable to perform failover, already in progress." notices ...]
00:25:15.869 [2024-07-20 17:17:31.856779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:25:15.869 [2024-07-20 17:17:31.856799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:25:15.869 [2024-07-20 17:17:31.856820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
[... the same three-step failure (Ctrlr is in error state / controller reinitialization failed / in failed state) repeats for each of the other nine controllers, and every reset completes with "_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed." ...]
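The errno = 111 in every connect() failure above is ECONNREFUSED: the target side has already been shut down, so nothing is listening on 10.0.0.2:4420 anymore and each reconnect attempt is refused, which is why all ten controllers end up in the failed state. On a Linux test node the constant can be confirmed straight from the kernel headers:

    grep ECONNREFUSED /usr/include/asm-generic/errno.h
    # -> #define ECONNREFUSED 111 /* Connection refused */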
00:25:16.435 17:17:32 -- target/shutdown.sh@135 -- # nvmfpid=
00:25:16.435 17:17:32 -- target/shutdown.sh@138 -- # sleep 1
00:25:17.369 17:17:33 -- target/shutdown.sh@141 -- # kill -9 614058
00:25:17.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (614058) - No such process
00:25:17.369 17:17:33 -- target/shutdown.sh@141 -- # true
00:25:17.369 17:17:33 -- target/shutdown.sh@143 -- # stoptarget
00:25:17.369 17:17:33 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:25:17.369 17:17:33 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:17.369 17:17:33 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:17.369 17:17:33 -- target/shutdown.sh@45 -- # nvmftestfini
00:25:17.369 17:17:33 -- nvmf/common.sh@476 -- # nvmfcleanup
00:25:17.369 17:17:33 -- nvmf/common.sh@116 -- # sync
00:25:17.369 17:17:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:25:17.369 17:17:33 -- nvmf/common.sh@119 -- # set +e
00:25:17.369 17:17:33 -- nvmf/common.sh@120 -- # for i in {1..20}
00:25:17.369 17:17:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:17.369 17:17:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:25:17.369 17:17:33 -- nvmf/common.sh@123 -- # set -e
00:25:17.369 17:17:33 -- nvmf/common.sh@124 -- # return 0
00:25:17.369 17:17:33 -- nvmf/common.sh@477 -- # '[' -n '' ']'
00:25:17.369 17:17:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:25:17.369 17:17:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:25:17.369 17:17:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:25:17.369 17:17:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:17.369 17:17:33 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:25:17.369 17:17:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:17.369 17:17:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:17.369 17:17:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:19.275 17:17:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:25:19.275
00:25:19.275 real 0m7.536s
00:25:19.275 user 0m18.536s
00:25:19.275 sys 0m1.403s
00:25:19.275 17:17:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:19.275 17:17:35 -- common/autotest_common.sh@10 -- # set +x
00:25:19.275 ************************************
00:25:19.275 END TEST nvmf_shutdown_tc3
00:25:19.275 ************************************
00:25:19.534 17:17:35 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT
00:25:19.534
00:25:19.534 real 0m28.046s
00:25:19.534 user 1m19.200s
00:25:19.534 sys 0m6.389s
00:25:19.534 17:17:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:19.534 17:17:35 -- common/autotest_common.sh@10 -- # set +x
00:25:19.534 ************************************
00:25:19.534 END TEST nvmf_shutdown
00:25:19.534 ************************************
00:25:19.534 17:17:35 -- nvmf/nvmf.sh@86 -- # timing_exit target
00:25:19.534 17:17:35 -- common/autotest_common.sh@718 -- # xtrace_disable
00:25:19.534 17:17:35 -- common/autotest_common.sh@10 -- # set +x
00:25:19.534 17:17:35 -- nvmf/nvmf.sh@88 -- # timing_enter host
00:25:19.534 17:17:35 -- common/autotest_common.sh@712 -- # xtrace_disable
00:25:19.534 17:17:35 -- common/autotest_common.sh@10 -- # set +x
00:25:19.534 17:17:35 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
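After nvmfcleanup and nvmf_tcp_fini above, the NVMe/TCP transport modules should be unloaded and the initiator interface stripped of its test address. A quick post-teardown check (a sketch; run on the test node as root, and note it only reflects what the traced commands are expected to leave behind):

    lsmod | grep '^nvme_tcp'    # expect no output once the module unload succeeded
    ip -4 addr show cvl_0_1     # expect no inet line left after the address flush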
00:25:19.534 17:17:35 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:25:19.534 17:17:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:25:19.534 17:17:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:25:19.534 17:17:35 -- common/autotest_common.sh@10 -- # set +x
00:25:19.534 ************************************
00:25:19.534 START TEST nvmf_multicontroller
00:25:19.534 ************************************
00:25:19.534 17:17:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:25:19.534 * Looking for test storage...
00:25:19.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:19.534 17:17:35 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:19.534 17:17:35 -- nvmf/common.sh@7 -- # uname -s
00:25:19.534 17:17:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:19.534 17:17:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:19.534 17:17:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:19.534 17:17:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:19.534 17:17:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:19.534 17:17:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:19.534 17:17:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:19.534 17:17:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:19.534 17:17:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:19.534 17:17:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:19.534 17:17:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:25:19.534 17:17:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:25:19.534 17:17:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:19.534 17:17:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:19.534 17:17:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:19.534 17:17:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:19.534 17:17:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:19.534 17:17:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:19.534 17:17:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:19.534 17:17:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain segments repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:19.534 17:17:35 -- paths/export.sh@3 -- # PATH=[... as above, with the toolchain segments prepended once more ...]
00:25:19.534 17:17:35 -- paths/export.sh@4 -- # PATH=[... as above, with the toolchain segments prepended once more ...]
00:25:19.534 17:17:35 -- paths/export.sh@5 -- # export PATH
00:25:19.534 17:17:35 -- paths/export.sh@6 -- # echo [... the exported PATH value ...]
00:25:19.534 17:17:35 -- nvmf/common.sh@46 -- # : 0
00:25:19.534 17:17:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:25:19.534 17:17:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:25:19.534 17:17:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:25:19.534 17:17:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:19.534 17:17:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:19.534 17:17:35 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:25:19.534 17:17:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:25:19.534 17:17:35 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:25:19.534 17:17:35 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:25:19.534 17:17:35 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:25:19.534 17:17:35 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:25:19.534 17:17:35 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:25:19.534 17:17:35 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:25:19.534 17:17:35 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:25:19.534 17:17:35 -- host/multicontroller.sh@23 -- # nvmftestinit
00:25:19.534 17:17:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:25:19.534 17:17:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:19.534 17:17:35 -- nvmf/common.sh@436 -- # prepare_net_devs
00:25:19.534 17:17:35 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:25:19.534 17:17:35 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:25:19.534 17:17:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:19.534 17:17:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:19.534 17:17:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
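The NVME_HOSTNQN picked up while sourcing nvmf/common.sh above is not hard-coded; nvme-cli generates a fresh UUID-based NQN each time, and the script reuses the UUID part as NVME_HOSTID. A minimal reproduction (assumes nvme-cli is installed):

    nvme gen-hostnqn
    # -> nqn.2014-08.org.nvmexpress:uuid:<fresh UUID>; everything after "uuid:"
    #    is what ends up in NVME_HOSTID (5b23e107-... in this run)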
00:25:19.534 17:17:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:25:19.534 17:17:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:25:19.534 17:17:35 -- nvmf/common.sh@284 -- # xtrace_disable
00:25:19.534 17:17:35 -- common/autotest_common.sh@10 -- # set +x
00:25:21.432 17:17:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
[... xtrace of the pci_devs/pci_net_devs/pci_drivers/net_devs array declarations and of the supported e810, x722 and mlx device-id tables (nvmf/common.sh@290-@334) elided ...]
00:25:21.432 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:25:21.432 Found 0000:0a:00.1 (0x8086 - 0x159b)
[... per-port driver and device-id checks ([[ ice == unknown ]], [[ 0x159b == 0x1017 ]], ...) and the net-device loop bookkeeping (nvmf/common.sh@339-@389) elided; both E810 ports use the ice driver and pass ...]
00:25:21.432 Found net devices under 0000:0a:00.0: cvl_0_0
00:25:21.432 Found net devices under 0000:0a:00.1: cvl_0_1
00:25:21.432 17:17:37 -- nvmf/common.sh@402 -- # is_hw=yes
00:25:21.432 17:17:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:25:21.432 17:17:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:25:21.432 17:17:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:25:21.432 17:17:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:21.432 17:17:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:21.432 17:17:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:21.432 17:17:37 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:25:21.432 17:17:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:21.432 17:17:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:21.432 17:17:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:25:21.432 17:17:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:21.432 17:17:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:21.432 17:17:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:25:21.432 17:17:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:25:21.432 17:17:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:25:21.432 17:17:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:21.690 17:17:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:21.690 17:17:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:21.690 17:17:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:25:21.690 17:17:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:21.690 17:17:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:21.690 17:17:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
--dport 4420 -j ACCEPT 00:25:21.690 17:17:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:21.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:25:21.690 00:25:21.690 --- 10.0.0.2 ping statistics --- 00:25:21.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.690 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:25:21.690 17:17:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:25:21.690 00:25:21.690 --- 10.0.0.1 ping statistics --- 00:25:21.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.690 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:25:21.690 17:17:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.690 17:17:37 -- nvmf/common.sh@410 -- # return 0 00:25:21.690 17:17:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:21.690 17:17:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.690 17:17:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:21.690 17:17:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:21.690 17:17:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.690 17:17:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:21.690 17:17:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:21.690 17:17:37 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:21.690 17:17:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:21.690 17:17:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:21.690 17:17:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.690 17:17:37 -- nvmf/common.sh@469 -- # nvmfpid=616479 00:25:21.690 17:17:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:21.690 17:17:37 -- nvmf/common.sh@470 -- # waitforlisten 616479 00:25:21.690 17:17:37 -- common/autotest_common.sh@819 -- # '[' -z 616479 ']' 00:25:21.690 17:17:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.690 17:17:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:21.690 17:17:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.690 17:17:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:21.690 17:17:37 -- common/autotest_common.sh@10 -- # set +x 00:25:21.690 [2024-07-20 17:17:37.745007] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:21.690 [2024-07-20 17:17:37.745091] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.690 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.690 [2024-07-20 17:17:37.809704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:21.947 [2024-07-20 17:17:37.893913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:21.947 [2024-07-20 17:17:37.894099] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:21.947 [2024-07-20 17:17:37.894118] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.947 [2024-07-20 17:17:37.894132] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.947 [2024-07-20 17:17:37.894191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.947 [2024-07-20 17:17:37.894230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.947 [2024-07-20 17:17:37.894233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.880 17:17:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.880 17:17:38 -- common/autotest_common.sh@852 -- # return 0 00:25:22.880 17:17:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:22.880 17:17:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 17:17:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.880 17:17:38 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 [2024-07-20 17:17:38.747573] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 Malloc0 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 [2024-07-20 17:17:38.817267] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 [2024-07-20 17:17:38.825117] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
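
[Annotation] The target-side setup captured above boils down to six RPCs: create the TCP transport, back a namespace with a malloc bdev, and expose the subsystem on two ports so the multipath cases below have a second path to attach to. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and the stock scripts/rpc.py wrapper in place of the test suite's rpc_cmd helper:

  # TCP transport with the same options as the log (-o, and -u 8192 for the IO unit size).
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks becomes namespace 1 of cnode1.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Two listeners on the same address: ports 4420 and 4421 give the initiator two paths.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
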
00:25:22.880 17:17:38 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 Malloc1 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:22.880 17:17:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 17:17:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.880 17:17:38 -- host/multicontroller.sh@44 -- # bdevperf_pid=616634 00:25:22.880 17:17:38 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.880 17:17:38 -- host/multicontroller.sh@47 -- # waitforlisten 616634 /var/tmp/bdevperf.sock 00:25:22.880 17:17:38 -- common/autotest_common.sh@819 -- # '[' -z 616634 ']' 00:25:22.880 17:17:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:22.880 17:17:38 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:22.880 17:17:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:22.880 17:17:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:22.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
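
[Annotation] At this point bdevperf has been launched with -z, which makes it idle on its own RPC socket (-r /var/tmp/bdevperf.sock) instead of immediately running the queued 128-deep, 4 KiB write workload; controllers are attached and the run is triggered over that socket later in the log. A sketch of that driver flow, with paths relative to the SPDK repo root and the extra hostaddr/hostsvcid arguments from the log omitted for brevity:

  # -z: wait for RPC configuration; -q/-o/-w/-t: queue depth 128, 4096-byte writes, 1 second.
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &
  # Hand the idling bdevperf a bdev over its socket, then kick off the queued workload.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
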
00:25:22.880 17:17:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:22.880 17:17:38 -- common/autotest_common.sh@10 -- # set +x 00:25:23.813 17:17:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:23.813 17:17:39 -- common/autotest_common.sh@852 -- # return 0 00:25:23.813 17:17:39 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:23.813 17:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.813 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:25:23.813 NVMe0n1 00:25:23.813 17:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.813 17:17:39 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.813 17:17:39 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:23.813 17:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.813 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:25:24.072 17:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.072 1 00:25:24.072 17:17:39 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:24.072 17:17:39 -- common/autotest_common.sh@640 -- # local es=0 00:25:24.072 17:17:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:24.072 17:17:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:24.072 17:17:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.072 17:17:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:24.072 17:17:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.072 17:17:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:24.072 17:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.072 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:25:24.072 request: 00:25:24.072 { 00:25:24.072 "name": "NVMe0", 00:25:24.072 "trtype": "tcp", 00:25:24.072 "traddr": "10.0.0.2", 00:25:24.072 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:24.072 "hostaddr": "10.0.0.2", 00:25:24.072 "hostsvcid": "60000", 00:25:24.072 "adrfam": "ipv4", 00:25:24.072 "trsvcid": "4420", 00:25:24.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.072 "method": "bdev_nvme_attach_controller", 00:25:24.072 "req_id": 1 00:25:24.072 } 00:25:24.072 Got JSON-RPC error response 00:25:24.072 response: 00:25:24.072 { 00:25:24.072 "code": -114, 00:25:24.072 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:24.072 } 00:25:24.072 17:17:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:24.072 17:17:39 -- common/autotest_common.sh@643 -- # es=1 00:25:24.072 17:17:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:24.072 17:17:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:24.072 17:17:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:24.072 17:17:39 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:24.073 17:17:39 -- common/autotest_common.sh@640 -- # local es=0 00:25:24.073 17:17:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:24.073 17:17:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:24.073 17:17:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.073 17:17:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:24.073 17:17:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.073 17:17:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:24.073 17:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.073 17:17:39 -- common/autotest_common.sh@10 -- # set +x 00:25:24.073 request: 00:25:24.073 { 00:25:24.073 "name": "NVMe0", 00:25:24.073 "trtype": "tcp", 00:25:24.073 "traddr": "10.0.0.2", 00:25:24.073 "hostaddr": "10.0.0.2", 00:25:24.073 "hostsvcid": "60000", 00:25:24.073 "adrfam": "ipv4", 00:25:24.073 "trsvcid": "4420", 00:25:24.073 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:24.073 "method": "bdev_nvme_attach_controller", 00:25:24.073 "req_id": 1 00:25:24.073 } 00:25:24.073 Got JSON-RPC error response 00:25:24.073 response: 00:25:24.073 { 00:25:24.073 "code": -114, 00:25:24.073 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:24.073 } 00:25:24.073 17:17:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:24.073 17:17:40 -- common/autotest_common.sh@643 -- # es=1 00:25:24.073 17:17:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:24.073 17:17:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:24.073 17:17:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:24.073 17:17:40 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:24.073 17:17:40 -- common/autotest_common.sh@640 -- # local es=0 00:25:24.073 17:17:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:24.073 17:17:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:24.073 17:17:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.073 17:17:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:24.073 17:17:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.073 17:17:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:24.073 17:17:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.073 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.073 request: 00:25:24.073 { 00:25:24.073 "name": "NVMe0", 00:25:24.073 "trtype": "tcp", 00:25:24.073 "traddr": "10.0.0.2", 00:25:24.073 "hostaddr": 
"10.0.0.2", 00:25:24.073 "hostsvcid": "60000", 00:25:24.073 "adrfam": "ipv4", 00:25:24.073 "trsvcid": "4420", 00:25:24.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.073 "multipath": "disable", 00:25:24.073 "method": "bdev_nvme_attach_controller", 00:25:24.073 "req_id": 1 00:25:24.073 } 00:25:24.073 Got JSON-RPC error response 00:25:24.073 response: 00:25:24.073 { 00:25:24.073 "code": -114, 00:25:24.073 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:24.073 } 00:25:24.073 17:17:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:24.073 17:17:40 -- common/autotest_common.sh@643 -- # es=1 00:25:24.073 17:17:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:24.073 17:17:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:24.073 17:17:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:24.073 17:17:40 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:24.073 17:17:40 -- common/autotest_common.sh@640 -- # local es=0 00:25:24.073 17:17:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:24.073 17:17:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:24.073 17:17:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.073 17:17:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:24.073 17:17:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:24.073 17:17:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:24.073 17:17:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.073 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.073 request: 00:25:24.073 { 00:25:24.073 "name": "NVMe0", 00:25:24.073 "trtype": "tcp", 00:25:24.073 "traddr": "10.0.0.2", 00:25:24.073 "hostaddr": "10.0.0.2", 00:25:24.073 "hostsvcid": "60000", 00:25:24.073 "adrfam": "ipv4", 00:25:24.073 "trsvcid": "4420", 00:25:24.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.073 "multipath": "failover", 00:25:24.073 "method": "bdev_nvme_attach_controller", 00:25:24.073 "req_id": 1 00:25:24.073 } 00:25:24.073 Got JSON-RPC error response 00:25:24.073 response: 00:25:24.073 { 00:25:24.073 "code": -114, 00:25:24.073 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:24.073 } 00:25:24.073 17:17:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:24.073 17:17:40 -- common/autotest_common.sh@643 -- # es=1 00:25:24.073 17:17:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:24.073 17:17:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:24.073 17:17:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:24.073 17:17:40 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:24.073 17:17:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.073 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.073 00:25:24.073 17:17:40 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:25:24.073 17:17:40 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:24.073 17:17:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.073 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.331 17:17:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.331 17:17:40 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:24.331 17:17:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.331 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.331 00:25:24.331 17:17:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.331 17:17:40 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:24.331 17:17:40 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:24.331 17:17:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.331 17:17:40 -- common/autotest_common.sh@10 -- # set +x 00:25:24.331 17:17:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.331 17:17:40 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:24.331 17:17:40 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:25.706 0 00:25:25.706 17:17:41 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:25.706 17:17:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.706 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.706 17:17:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.706 17:17:41 -- host/multicontroller.sh@100 -- # killprocess 616634 00:25:25.706 17:17:41 -- common/autotest_common.sh@926 -- # '[' -z 616634 ']' 00:25:25.706 17:17:41 -- common/autotest_common.sh@930 -- # kill -0 616634 00:25:25.706 17:17:41 -- common/autotest_common.sh@931 -- # uname 00:25:25.706 17:17:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:25.706 17:17:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 616634 00:25:25.706 17:17:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:25.706 17:17:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:25.706 17:17:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 616634' 00:25:25.707 killing process with pid 616634 00:25:25.707 17:17:41 -- common/autotest_common.sh@945 -- # kill 616634 00:25:25.707 17:17:41 -- common/autotest_common.sh@950 -- # wait 616634 00:25:25.964 17:17:41 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:25.964 17:17:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.964 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.964 17:17:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.964 17:17:41 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:25.964 17:17:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.964 17:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:25.964 17:17:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.964 17:17:41 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:25.964 
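
[Annotation] The four NOT cases above pin down the attach semantics: once NVMe0 exists, re-attaching the same controller name with a different hostnqn, a different subnqn, multipath disabled, or failover over the identical path is rejected with JSON-RPC error -114, while a plain attach to the second listener on port 4421 succeeds and bdev_nvme_get_controllers still reports the expected controller count. A sketch of asserting one failure case by hand (rpc.py exits nonzero on an error response):

  # With multipath disabled, a second attach under the same name must fail;
  # the target answers -114 ("already exists"), as in the captured responses above.
  if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -x disable; then
      echo "unexpected success: duplicate attach was accepted" >&2
      exit 1
  fi
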
17:17:41 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:25.964 17:17:41 -- common/autotest_common.sh@1597 -- # read -r file 00:25:25.964 17:17:41 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:25.964 17:17:41 -- common/autotest_common.sh@1596 -- # sort -u 00:25:25.964 17:17:41 -- common/autotest_common.sh@1598 -- # cat 00:25:25.964 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:25.964 [2024-07-20 17:17:38.928267] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:25.965 [2024-07-20 17:17:38.928346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616634 ] 00:25:25.965 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.965 [2024-07-20 17:17:38.988397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.965 [2024-07-20 17:17:39.072633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.965 [2024-07-20 17:17:40.464271] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 25c0c795-67ac-4ab7-bacd-962c1daca2bd already exists 00:25:25.965 [2024-07-20 17:17:40.464316] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:25c0c795-67ac-4ab7-bacd-962c1daca2bd alias for bdev NVMe1n1 00:25:25.965 [2024-07-20 17:17:40.464334] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:25.965 Running I/O for 1 seconds... 00:25:25.965 00:25:25.965 Latency(us) 00:25:25.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.965 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:25.965 NVMe0n1 : 1.01 15748.38 61.52 0.00 0.00 8106.21 2026.76 18155.90 00:25:25.965 =================================================================================================================== 00:25:25.965 Total : 15748.38 61.52 0.00 0.00 8106.21 2026.76 18155.90 00:25:25.965 Received shutdown signal, test time was about 1.000000 seconds 00:25:25.965 00:25:25.965 Latency(us) 00:25:25.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.965 =================================================================================================================== 00:25:25.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.965 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:25.965 17:17:41 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:25.965 17:17:41 -- common/autotest_common.sh@1597 -- # read -r file 00:25:25.965 17:17:41 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:25.965 17:17:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:25.965 17:17:41 -- nvmf/common.sh@116 -- # sync 00:25:25.965 17:17:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:25.965 17:17:41 -- nvmf/common.sh@119 -- # set +e 00:25:25.965 17:17:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:25.965 17:17:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:25.965 rmmod nvme_tcp 00:25:25.965 rmmod nvme_fabrics 00:25:25.965 rmmod nvme_keyring 00:25:25.965 17:17:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:25.965 17:17:41 -- nvmf/common.sh@123 -- # set -e 
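
[Annotation] The bdevperf summary inside try.txt is internally consistent: 15748.38 write IOPS at the configured 4096-byte IO size works out to the 61.52 MiB/s printed next to it. A one-line check:

  awk 'BEGIN { printf "%.2f MiB/s\n", 15748.38 * 4096 / (1024 * 1024) }'
  # prints 61.52 MiB/s
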
00:25:25.965 17:17:41 -- nvmf/common.sh@124 -- # return 0 00:25:25.965 17:17:41 -- nvmf/common.sh@477 -- # '[' -n 616479 ']' 00:25:25.965 17:17:41 -- nvmf/common.sh@478 -- # killprocess 616479 00:25:25.965 17:17:41 -- common/autotest_common.sh@926 -- # '[' -z 616479 ']' 00:25:25.965 17:17:41 -- common/autotest_common.sh@930 -- # kill -0 616479 00:25:25.965 17:17:41 -- common/autotest_common.sh@931 -- # uname 00:25:25.965 17:17:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:25.965 17:17:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 616479 00:25:25.965 17:17:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:25.965 17:17:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:25.965 17:17:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 616479' 00:25:25.965 killing process with pid 616479 00:25:25.965 17:17:42 -- common/autotest_common.sh@945 -- # kill 616479 00:25:25.965 17:17:42 -- common/autotest_common.sh@950 -- # wait 616479 00:25:26.225 17:17:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:26.225 17:17:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:26.225 17:17:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:26.225 17:17:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.225 17:17:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:26.225 17:17:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.225 17:17:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.225 17:17:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.752 17:17:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:28.752 00:25:28.752 real 0m8.846s 00:25:28.752 user 0m17.240s 00:25:28.752 sys 0m2.281s 00:25:28.752 17:17:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.752 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:25:28.752 ************************************ 00:25:28.752 END TEST nvmf_multicontroller 00:25:28.752 ************************************ 00:25:28.752 17:17:44 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:28.752 17:17:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:28.752 17:17:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:28.752 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:25:28.752 ************************************ 00:25:28.752 START TEST nvmf_aer 00:25:28.752 ************************************ 00:25:28.752 17:17:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:28.752 * Looking for test storage... 
00:25:28.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.752 17:17:44 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.752 17:17:44 -- nvmf/common.sh@7 -- # uname -s 00:25:28.752 17:17:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.752 17:17:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.752 17:17:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.752 17:17:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.752 17:17:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.752 17:17:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.752 17:17:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.752 17:17:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.752 17:17:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.752 17:17:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.752 17:17:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:28.752 17:17:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:28.752 17:17:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.752 17:17:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.752 17:17:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.752 17:17:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.752 17:17:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.752 17:17:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.752 17:17:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.752 17:17:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.752 17:17:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.752 17:17:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.752 17:17:44 -- paths/export.sh@5 -- # export PATH 00:25:28.752 17:17:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.752 17:17:44 -- nvmf/common.sh@46 -- # : 0 00:25:28.752 17:17:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:28.752 17:17:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:28.752 17:17:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:28.752 17:17:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.752 17:17:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.752 17:17:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:28.752 17:17:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:28.752 17:17:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:28.752 17:17:44 -- host/aer.sh@11 -- # nvmftestinit 00:25:28.752 17:17:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:28.752 17:17:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.752 17:17:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:28.752 17:17:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:28.752 17:17:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:28.752 17:17:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.752 17:17:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.752 17:17:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.752 17:17:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:28.752 17:17:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:28.752 17:17:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:28.752 17:17:44 -- common/autotest_common.sh@10 -- # set +x 00:25:30.650 17:17:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:30.650 17:17:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:30.650 17:17:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:30.650 17:17:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:30.650 17:17:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:30.650 17:17:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:30.650 17:17:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:30.650 17:17:46 -- nvmf/common.sh@294 -- # net_devs=() 00:25:30.650 17:17:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:30.650 17:17:46 -- nvmf/common.sh@295 -- # e810=() 00:25:30.650 17:17:46 -- nvmf/common.sh@295 -- # local -ga e810 00:25:30.650 17:17:46 -- nvmf/common.sh@296 -- # x722=() 00:25:30.650 
17:17:46 -- nvmf/common.sh@296 -- # local -ga x722 00:25:30.650 17:17:46 -- nvmf/common.sh@297 -- # mlx=() 00:25:30.650 17:17:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:30.650 17:17:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.650 17:17:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:30.650 17:17:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:30.650 17:17:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:30.650 17:17:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:30.650 17:17:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:30.650 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:30.650 17:17:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:30.650 17:17:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:30.650 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:30.650 17:17:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:30.650 17:17:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:30.650 17:17:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.650 17:17:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:30.650 17:17:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.650 17:17:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:30.650 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:30.650 17:17:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.650 17:17:46 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:30.650 17:17:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.650 17:17:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:30.650 17:17:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.650 17:17:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:30.650 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:30.650 17:17:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.650 17:17:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:30.650 17:17:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:30.650 17:17:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:30.650 17:17:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.650 17:17:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.650 17:17:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.650 17:17:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:30.650 17:17:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.650 17:17:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.650 17:17:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:30.650 17:17:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.650 17:17:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.650 17:17:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:30.650 17:17:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:30.650 17:17:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.650 17:17:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.650 17:17:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.650 17:17:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.650 17:17:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:30.650 17:17:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.650 17:17:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.650 17:17:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.650 17:17:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:30.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:25:30.650 00:25:30.650 --- 10.0.0.2 ping statistics --- 00:25:30.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.650 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:25:30.650 17:17:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:25:30.650 00:25:30.650 --- 10.0.0.1 ping statistics --- 00:25:30.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.650 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:25:30.650 17:17:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.650 17:17:46 -- nvmf/common.sh@410 -- # return 0 00:25:30.650 17:17:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:30.650 17:17:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.650 17:17:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:30.650 17:17:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.650 17:17:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:30.650 17:17:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:30.650 17:17:46 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:30.650 17:17:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:30.650 17:17:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:30.650 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:30.650 17:17:46 -- nvmf/common.sh@469 -- # nvmfpid=619008 00:25:30.650 17:17:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:30.650 17:17:46 -- nvmf/common.sh@470 -- # waitforlisten 619008 00:25:30.650 17:17:46 -- common/autotest_common.sh@819 -- # '[' -z 619008 ']' 00:25:30.650 17:17:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.650 17:17:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:30.650 17:17:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.650 17:17:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:30.650 17:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:30.650 [2024-07-20 17:17:46.567331] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:30.650 [2024-07-20 17:17:46.567416] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.650 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.650 [2024-07-20 17:17:46.631000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.650 [2024-07-20 17:17:46.715452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:30.650 [2024-07-20 17:17:46.715601] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.650 [2024-07-20 17:17:46.715619] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.650 [2024-07-20 17:17:46.715632] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:30.650 [2024-07-20 17:17:46.715689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.650 [2024-07-20 17:17:46.715821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.650 [2024-07-20 17:17:46.715847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.650 [2024-07-20 17:17:46.715850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.630 17:17:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:31.630 17:17:47 -- common/autotest_common.sh@852 -- # return 0 00:25:31.630 17:17:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:31.630 17:17:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:31.630 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.630 17:17:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.630 17:17:47 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.630 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.630 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.630 [2024-07-20 17:17:47.595606] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.630 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.630 17:17:47 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:31.630 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.630 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.630 Malloc0 00:25:31.630 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.630 17:17:47 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:31.630 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.630 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.630 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.630 17:17:47 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.630 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.630 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.630 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.630 17:17:47 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.630 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.630 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.630 [2024-07-20 17:17:47.649238] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.630 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.630 17:17:47 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:31.630 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.630 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.630 [2024-07-20 17:17:47.656940] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:31.630 [ 00:25:31.630 { 00:25:31.630 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:31.630 "subtype": "Discovery", 00:25:31.630 "listen_addresses": [], 00:25:31.630 "allow_any_host": true, 00:25:31.630 "hosts": [] 00:25:31.630 }, 00:25:31.630 { 00:25:31.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:31.630 "subtype": "NVMe", 00:25:31.630 "listen_addresses": [ 00:25:31.630 { 00:25:31.630 "transport": "TCP", 00:25:31.630 "trtype": "TCP", 00:25:31.630 "adrfam": "IPv4", 00:25:31.630 "traddr": "10.0.0.2", 00:25:31.630 "trsvcid": "4420" 00:25:31.630 } 00:25:31.630 ], 00:25:31.630 "allow_any_host": true, 00:25:31.630 "hosts": [], 00:25:31.630 "serial_number": "SPDK00000000000001", 00:25:31.630 "model_number": "SPDK bdev Controller", 00:25:31.630 "max_namespaces": 2, 00:25:31.630 "min_cntlid": 1, 00:25:31.630 "max_cntlid": 65519, 00:25:31.630 "namespaces": [ 00:25:31.630 { 00:25:31.630 "nsid": 1, 00:25:31.630 "bdev_name": "Malloc0", 00:25:31.630 "name": "Malloc0", 00:25:31.630 "nguid": "7CF399083F1247D5839CDAEA957C3006", 00:25:31.630 "uuid": "7cf39908-3f12-47d5-839c-daea957c3006" 00:25:31.630 } 00:25:31.630 ] 00:25:31.630 } 00:25:31.630 ] 00:25:31.630 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.630 17:17:47 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:31.630 17:17:47 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:31.630 17:17:47 -- host/aer.sh@33 -- # aerpid=619164 00:25:31.630 17:17:47 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:31.630 17:17:47 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:31.630 17:17:47 -- common/autotest_common.sh@1244 -- # local i=0 00:25:31.630 17:17:47 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:31.630 17:17:47 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:25:31.630 17:17:47 -- common/autotest_common.sh@1247 -- # i=1 00:25:31.630 17:17:47 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:31.630 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.630 17:17:47 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:31.630 17:17:47 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:25:31.630 17:17:47 -- common/autotest_common.sh@1247 -- # i=2 00:25:31.630 17:17:47 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:31.888 17:17:47 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:31.888 17:17:47 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:31.888 17:17:47 -- common/autotest_common.sh@1255 -- # return 0 00:25:31.888 17:17:47 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:31.888 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.888 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.888 Malloc1 00:25:31.888 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.888 17:17:47 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:31.888 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.888 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.888 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.888 17:17:47 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:31.888 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.888 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.888 [ 00:25:31.888 { 00:25:31.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:31.888 "subtype": "Discovery", 00:25:31.888 "listen_addresses": [], 00:25:31.888 "allow_any_host": true, 00:25:31.888 "hosts": [] 00:25:31.888 }, 00:25:31.888 { 00:25:31.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.888 "subtype": "NVMe", 00:25:31.888 "listen_addresses": [ 00:25:31.888 { 00:25:31.888 "transport": "TCP", 00:25:31.888 "trtype": "TCP", 00:25:31.888 "adrfam": "IPv4", 00:25:31.888 "traddr": "10.0.0.2", 00:25:31.888 "trsvcid": "4420" 00:25:31.888 } 00:25:31.888 ], 00:25:31.888 "allow_any_host": true, 00:25:31.888 "hosts": [], 00:25:31.888 "serial_number": "SPDK00000000000001", 00:25:31.888 "model_number": "SPDK bdev Controller", 00:25:31.888 "max_namespaces": 2, 00:25:31.888 "min_cntlid": 1, 00:25:31.888 "max_cntlid": 65519, 00:25:31.888 "namespaces": [ 00:25:31.888 { 00:25:31.888 "nsid": 1, 00:25:31.888 "bdev_name": "Malloc0", 00:25:31.888 "name": "Malloc0", 00:25:31.888 "nguid": "7CF399083F1247D5839CDAEA957C3006", 00:25:31.888 "uuid": "7cf39908-3f12-47d5-839c-daea957c3006" 00:25:31.888 }, 00:25:31.888 { 00:25:31.888 "nsid": 2, 00:25:31.888 "bdev_name": "Malloc1", 00:25:31.888 "name": "Malloc1", 00:25:31.888 "nguid": "00B246008B63488DBCB036C23778E8F4", 00:25:31.888 "uuid": "00b24600-8b63-488d-bcb0-36c23778e8f4" 00:25:31.888 } 00:25:31.888 ] 00:25:31.888 } 00:25:31.888 ] 00:25:31.888 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.888 17:17:47 -- host/aer.sh@43 -- # wait 619164 00:25:31.888 Asynchronous Event Request test 00:25:31.888 Attaching to 10.0.0.2 00:25:31.888 Attached to 10.0.0.2 00:25:31.888 Registering asynchronous event callbacks... 00:25:31.888 Starting namespace attribute notice tests for all controllers... 00:25:31.888 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:31.888 aer_cb - Changed Namespace 00:25:31.888 Cleaning up... 
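
[Annotation] The AER exchange just logged follows directly from the test flow: the aer tool attaches to cnode1, registers its event callbacks, and touches /tmp/aer_touch_file once armed; adding Malloc1 as nsid 2 then fires the namespace-attribute-changed AEN (log page 4, the Changed Namespace List), which aer_cb reports before cleanup. A sketch of the same sequence, with the -n 2 argument assumed (from context, not verified against the tool's help text) to be the namespace count expected after the change:

  # Arm the listener, then wait for its touch file before mutating the subsystem.
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
  # A second namespace triggers the Changed Namespace List AEN seen in the log.
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
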
00:25:31.888 17:17:47 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:31.888 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.888 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.888 17:17:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.888 17:17:47 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:31.888 17:17:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.888 17:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:31.888 17:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.888 17:17:48 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.888 17:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.888 17:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:31.888 17:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.888 17:17:48 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:31.888 17:17:48 -- host/aer.sh@51 -- # nvmftestfini 00:25:31.888 17:17:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:31.888 17:17:48 -- nvmf/common.sh@116 -- # sync 00:25:31.888 17:17:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:31.888 17:17:48 -- nvmf/common.sh@119 -- # set +e 00:25:31.888 17:17:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:31.888 17:17:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:31.888 rmmod nvme_tcp 00:25:32.146 rmmod nvme_fabrics 00:25:32.146 rmmod nvme_keyring 00:25:32.146 17:17:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:32.146 17:17:48 -- nvmf/common.sh@123 -- # set -e 00:25:32.146 17:17:48 -- nvmf/common.sh@124 -- # return 0 00:25:32.146 17:17:48 -- nvmf/common.sh@477 -- # '[' -n 619008 ']' 00:25:32.146 17:17:48 -- nvmf/common.sh@478 -- # killprocess 619008 00:25:32.146 17:17:48 -- common/autotest_common.sh@926 -- # '[' -z 619008 ']' 00:25:32.146 17:17:48 -- common/autotest_common.sh@930 -- # kill -0 619008 00:25:32.146 17:17:48 -- common/autotest_common.sh@931 -- # uname 00:25:32.146 17:17:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:32.146 17:17:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 619008 00:25:32.146 17:17:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:32.146 17:17:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:32.146 17:17:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 619008' 00:25:32.146 killing process with pid 619008 00:25:32.146 17:17:48 -- common/autotest_common.sh@945 -- # kill 619008 00:25:32.146 [2024-07-20 17:17:48.114201] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:32.146 17:17:48 -- common/autotest_common.sh@950 -- # wait 619008 00:25:32.404 17:17:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:32.404 17:17:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:32.404 17:17:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:32.404 17:17:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.404 17:17:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:32.404 17:17:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.404 17:17:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.404 17:17:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.332 17:17:50 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:34.332 00:25:34.332 real 0m5.993s 00:25:34.332 user 0m7.269s 00:25:34.332 sys 0m1.904s 00:25:34.332 17:17:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.332 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.332 ************************************ 00:25:34.332 END TEST nvmf_aer 00:25:34.332 ************************************ 00:25:34.332 17:17:50 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:34.332 17:17:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:34.332 17:17:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:34.332 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:34.332 ************************************ 00:25:34.332 START TEST nvmf_async_init 00:25:34.332 ************************************ 00:25:34.332 17:17:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:34.332 * Looking for test storage... 00:25:34.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.332 17:17:50 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.332 17:17:50 -- nvmf/common.sh@7 -- # uname -s 00:25:34.332 17:17:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.332 17:17:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.332 17:17:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.332 17:17:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.332 17:17:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.332 17:17:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.332 17:17:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.332 17:17:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.332 17:17:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.332 17:17:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.332 17:17:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:34.332 17:17:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:34.332 17:17:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.332 17:17:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.332 17:17:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.332 17:17:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.332 17:17:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.332 17:17:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.332 17:17:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.332 17:17:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.332 17:17:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.332 17:17:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.332 17:17:50 -- paths/export.sh@5 -- # export PATH 00:25:34.332 17:17:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.332 17:17:50 -- nvmf/common.sh@46 -- # : 0 00:25:34.332 17:17:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:34.332 17:17:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:34.332 17:17:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:34.332 17:17:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.332 17:17:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.332 17:17:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:34.332 17:17:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:34.332 17:17:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:34.332 17:17:50 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:34.333 17:17:50 -- host/async_init.sh@14 -- # null_block_size=512 00:25:34.333 17:17:50 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:34.333 17:17:50 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:34.333 17:17:50 -- host/async_init.sh@20 -- # uuidgen 00:25:34.333 17:17:50 -- host/async_init.sh@20 -- # tr -d - 00:25:34.333 17:17:50 -- host/async_init.sh@20 -- # nguid=685dfc34dfc242ccbea37d92b8de967a 00:25:34.333 17:17:50 -- host/async_init.sh@22 -- # nvmftestinit 00:25:34.333 17:17:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
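A note on the nguid generated at host/async_init.sh@20 just above: the script strips the dashes from a freshly generated UUID, and the same 128-bit value later shows up both as the -g argument to nvmf_subsystem_add_ns and, re-dashed, as the "uuid" field in the bdev_get_bdevs output. The same derivation as a sketch:

# mirror of host/async_init.sh@20: strip dashes from a random UUID to get
# a 32-hex-digit NGUID suitable for nvmf_subsystem_add_ns -g
nguid=$(uuidgen | tr -d -)   # e.g. 685dfc34dfc242ccbea37d92b8de967a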
00:25:34.333 17:17:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.333 17:17:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:34.333 17:17:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:34.333 17:17:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:34.333 17:17:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.333 17:17:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.333 17:17:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.333 17:17:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:34.333 17:17:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:34.333 17:17:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:34.333 17:17:50 -- common/autotest_common.sh@10 -- # set +x 00:25:36.233 17:17:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:36.233 17:17:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:36.233 17:17:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:36.233 17:17:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:36.233 17:17:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:36.233 17:17:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:36.233 17:17:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:36.233 17:17:52 -- nvmf/common.sh@294 -- # net_devs=() 00:25:36.233 17:17:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:36.233 17:17:52 -- nvmf/common.sh@295 -- # e810=() 00:25:36.233 17:17:52 -- nvmf/common.sh@295 -- # local -ga e810 00:25:36.233 17:17:52 -- nvmf/common.sh@296 -- # x722=() 00:25:36.233 17:17:52 -- nvmf/common.sh@296 -- # local -ga x722 00:25:36.233 17:17:52 -- nvmf/common.sh@297 -- # mlx=() 00:25:36.233 17:17:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:36.233 17:17:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.233 17:17:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:36.233 17:17:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:36.233 17:17:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:36.233 17:17:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:36.233 17:17:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:36.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:36.233 17:17:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:36.233 17:17:52 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:36.233 17:17:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:36.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:36.233 17:17:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:36.233 17:17:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.233 17:17:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.233 17:17:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.233 17:17:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.233 17:17:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:36.233 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:36.233 17:17:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.233 17:17:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:36.233 17:17:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.233 17:17:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:36.233 17:17:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.233 17:17:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:36.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:36.233 17:17:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.233 17:17:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:36.233 17:17:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:36.233 17:17:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:36.233 17:17:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:36.233 17:17:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.233 17:17:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.233 17:17:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.233 17:17:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:36.233 17:17:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.233 17:17:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.233 17:17:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:36.233 17:17:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.233 17:17:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.233 17:17:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:36.233 17:17:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:36.233 17:17:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.233 17:17:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:25:36.233 17:17:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.233 17:17:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.233 17:17:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:36.233 17:17:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.492 17:17:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.492 17:17:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.492 17:17:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:36.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:25:36.492 00:25:36.492 --- 10.0.0.2 ping statistics --- 00:25:36.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.492 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:25:36.492 17:17:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:36.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:25:36.492 00:25:36.492 --- 10.0.0.1 ping statistics --- 00:25:36.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.492 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:25:36.492 17:17:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.492 17:17:52 -- nvmf/common.sh@410 -- # return 0 00:25:36.492 17:17:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:36.492 17:17:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.492 17:17:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:36.492 17:17:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:36.492 17:17:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.492 17:17:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:36.492 17:17:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:36.492 17:17:52 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:36.492 17:17:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:36.492 17:17:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:36.492 17:17:52 -- common/autotest_common.sh@10 -- # set +x 00:25:36.492 17:17:52 -- nvmf/common.sh@469 -- # nvmfpid=621116 00:25:36.492 17:17:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:36.492 17:17:52 -- nvmf/common.sh@470 -- # waitforlisten 621116 00:25:36.492 17:17:52 -- common/autotest_common.sh@819 -- # '[' -z 621116 ']' 00:25:36.492 17:17:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.492 17:17:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:36.492 17:17:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.492 17:17:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:36.492 17:17:52 -- common/autotest_common.sh@10 -- # set +x 00:25:36.492 [2024-07-20 17:17:52.510298] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
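The bring-up traced above splits one dual-port NIC between two network stacks: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings confirm the loopback path before nvmf_tgt is started inside the namespace. Condensed sketch of the same bring-up, using the interface names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator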
00:25:36.492 [2024-07-20 17:17:52.510387] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.492 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.492 [2024-07-20 17:17:52.586866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.750 [2024-07-20 17:17:52.680716] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:36.750 [2024-07-20 17:17:52.680899] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.750 [2024-07-20 17:17:52.680922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.750 [2024-07-20 17:17:52.680938] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.750 [2024-07-20 17:17:52.680968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.679 17:17:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:37.679 17:17:53 -- common/autotest_common.sh@852 -- # return 0 00:25:37.679 17:17:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:37.679 17:17:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:37.679 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.679 17:17:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.679 17:17:53 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:37.679 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.679 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.679 [2024-07-20 17:17:53.544681] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.679 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.679 17:17:53 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:37.679 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.679 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.679 null0 00:25:37.679 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.679 17:17:53 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:37.679 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.679 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.679 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.679 17:17:53 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:37.679 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.679 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.679 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.679 17:17:53 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 685dfc34dfc242ccbea37d92b8de967a 00:25:37.679 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.679 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.679 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.679 17:17:53 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:37.679 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.679 17:17:53 -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.679 [2024-07-20 17:17:53.584940] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.679 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.679 17:17:53 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:37.679 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.679 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.679 nvme0n1 00:25:37.679 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.679 17:17:53 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:37.679 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.679 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.679 [ 00:25:37.679 { 00:25:37.680 "name": "nvme0n1", 00:25:37.680 "aliases": [ 00:25:37.680 "685dfc34-dfc2-42cc-bea3-7d92b8de967a" 00:25:37.680 ], 00:25:37.680 "product_name": "NVMe disk", 00:25:37.680 "block_size": 512, 00:25:37.680 "num_blocks": 2097152, 00:25:37.680 "uuid": "685dfc34-dfc2-42cc-bea3-7d92b8de967a", 00:25:37.680 "assigned_rate_limits": { 00:25:37.680 "rw_ios_per_sec": 0, 00:25:37.680 "rw_mbytes_per_sec": 0, 00:25:37.680 "r_mbytes_per_sec": 0, 00:25:37.680 "w_mbytes_per_sec": 0 00:25:37.680 }, 00:25:37.680 "claimed": false, 00:25:37.680 "zoned": false, 00:25:37.680 "supported_io_types": { 00:25:37.680 "read": true, 00:25:37.680 "write": true, 00:25:37.680 "unmap": false, 00:25:37.680 "write_zeroes": true, 00:25:37.680 "flush": true, 00:25:37.680 "reset": true, 00:25:37.680 "compare": true, 00:25:37.680 "compare_and_write": true, 00:25:37.680 "abort": true, 00:25:37.680 "nvme_admin": true, 00:25:37.680 "nvme_io": true 00:25:37.680 }, 00:25:37.680 "driver_specific": { 00:25:37.680 "nvme": [ 00:25:37.680 { 00:25:37.680 "trid": { 00:25:37.680 "trtype": "TCP", 00:25:37.680 "adrfam": "IPv4", 00:25:37.680 "traddr": "10.0.0.2", 00:25:37.680 "trsvcid": "4420", 00:25:37.680 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:37.680 }, 00:25:37.680 "ctrlr_data": { 00:25:37.680 "cntlid": 1, 00:25:37.680 "vendor_id": "0x8086", 00:25:37.680 "model_number": "SPDK bdev Controller", 00:25:37.680 "serial_number": "00000000000000000000", 00:25:37.680 "firmware_revision": "24.01.1", 00:25:37.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:37.680 "oacs": { 00:25:37.680 "security": 0, 00:25:37.680 "format": 0, 00:25:37.680 "firmware": 0, 00:25:37.680 "ns_manage": 0 00:25:37.680 }, 00:25:37.680 "multi_ctrlr": true, 00:25:37.680 "ana_reporting": false 00:25:37.680 }, 00:25:37.680 "vs": { 00:25:37.680 "nvme_version": "1.3" 00:25:37.680 }, 00:25:37.680 "ns_data": { 00:25:37.680 "id": 1, 00:25:37.680 "can_share": true 00:25:37.680 } 00:25:37.680 } 00:25:37.680 ], 00:25:37.680 "mp_policy": "active_passive" 00:25:37.680 } 00:25:37.680 } 00:25:37.680 ] 00:25:37.680 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.680 17:17:53 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:37.680 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.680 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.680 [2024-07-20 17:17:53.833615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:37.680 [2024-07-20 17:17:53.833703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x111c0b0 (9): Bad file 
descriptor 00:25:37.936 [2024-07-20 17:17:53.965943] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:37.936 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.936 17:17:53 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:37.936 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.936 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.936 [ 00:25:37.936 { 00:25:37.936 "name": "nvme0n1", 00:25:37.936 "aliases": [ 00:25:37.936 "685dfc34-dfc2-42cc-bea3-7d92b8de967a" 00:25:37.936 ], 00:25:37.936 "product_name": "NVMe disk", 00:25:37.936 "block_size": 512, 00:25:37.936 "num_blocks": 2097152, 00:25:37.936 "uuid": "685dfc34-dfc2-42cc-bea3-7d92b8de967a", 00:25:37.936 "assigned_rate_limits": { 00:25:37.936 "rw_ios_per_sec": 0, 00:25:37.936 "rw_mbytes_per_sec": 0, 00:25:37.936 "r_mbytes_per_sec": 0, 00:25:37.936 "w_mbytes_per_sec": 0 00:25:37.937 }, 00:25:37.937 "claimed": false, 00:25:37.937 "zoned": false, 00:25:37.937 "supported_io_types": { 00:25:37.937 "read": true, 00:25:37.937 "write": true, 00:25:37.937 "unmap": false, 00:25:37.937 "write_zeroes": true, 00:25:37.937 "flush": true, 00:25:37.937 "reset": true, 00:25:37.937 "compare": true, 00:25:37.937 "compare_and_write": true, 00:25:37.937 "abort": true, 00:25:37.937 "nvme_admin": true, 00:25:37.937 "nvme_io": true 00:25:37.937 }, 00:25:37.937 "driver_specific": { 00:25:37.937 "nvme": [ 00:25:37.937 { 00:25:37.937 "trid": { 00:25:37.937 "trtype": "TCP", 00:25:37.937 "adrfam": "IPv4", 00:25:37.937 "traddr": "10.0.0.2", 00:25:37.937 "trsvcid": "4420", 00:25:37.937 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:37.937 }, 00:25:37.937 "ctrlr_data": { 00:25:37.937 "cntlid": 2, 00:25:37.937 "vendor_id": "0x8086", 00:25:37.937 "model_number": "SPDK bdev Controller", 00:25:37.937 "serial_number": "00000000000000000000", 00:25:37.937 "firmware_revision": "24.01.1", 00:25:37.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:37.937 "oacs": { 00:25:37.937 "security": 0, 00:25:37.937 "format": 0, 00:25:37.937 "firmware": 0, 00:25:37.937 "ns_manage": 0 00:25:37.937 }, 00:25:37.937 "multi_ctrlr": true, 00:25:37.937 "ana_reporting": false 00:25:37.937 }, 00:25:37.937 "vs": { 00:25:37.937 "nvme_version": "1.3" 00:25:37.937 }, 00:25:37.937 "ns_data": { 00:25:37.937 "id": 1, 00:25:37.937 "can_share": true 00:25:37.937 } 00:25:37.937 } 00:25:37.937 ], 00:25:37.937 "mp_policy": "active_passive" 00:25:37.937 } 00:25:37.937 } 00:25:37.937 ] 00:25:37.937 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.937 17:17:53 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.937 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.937 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.937 17:17:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.937 17:17:53 -- host/async_init.sh@53 -- # mktemp 00:25:37.937 17:17:53 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ALNUk6ICGE 00:25:37.937 17:17:53 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:37.937 17:17:53 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ALNUk6ICGE 00:25:37.937 17:17:53 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:37.937 17:17:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.937 17:17:53 -- common/autotest_common.sh@10 -- # set +x 00:25:37.937 17:17:54 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.937 17:17:54 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:37.937 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.937 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:37.937 [2024-07-20 17:17:54.010237] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:37.937 [2024-07-20 17:17:54.010381] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:37.937 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.937 17:17:54 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ALNUk6ICGE 00:25:37.937 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.937 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:37.937 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.937 17:17:54 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ALNUk6ICGE 00:25:37.937 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.937 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:37.937 [2024-07-20 17:17:54.026273] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:37.937 nvme0n1 00:25:37.937 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.937 17:17:54 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:37.937 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.937 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.194 [ 00:25:38.194 { 00:25:38.194 "name": "nvme0n1", 00:25:38.194 "aliases": [ 00:25:38.194 "685dfc34-dfc2-42cc-bea3-7d92b8de967a" 00:25:38.194 ], 00:25:38.194 "product_name": "NVMe disk", 00:25:38.194 "block_size": 512, 00:25:38.194 "num_blocks": 2097152, 00:25:38.194 "uuid": "685dfc34-dfc2-42cc-bea3-7d92b8de967a", 00:25:38.194 "assigned_rate_limits": { 00:25:38.194 "rw_ios_per_sec": 0, 00:25:38.194 "rw_mbytes_per_sec": 0, 00:25:38.194 "r_mbytes_per_sec": 0, 00:25:38.194 "w_mbytes_per_sec": 0 00:25:38.194 }, 00:25:38.194 "claimed": false, 00:25:38.194 "zoned": false, 00:25:38.194 "supported_io_types": { 00:25:38.194 "read": true, 00:25:38.194 "write": true, 00:25:38.194 "unmap": false, 00:25:38.194 "write_zeroes": true, 00:25:38.194 "flush": true, 00:25:38.194 "reset": true, 00:25:38.194 "compare": true, 00:25:38.194 "compare_and_write": true, 00:25:38.194 "abort": true, 00:25:38.194 "nvme_admin": true, 00:25:38.194 "nvme_io": true 00:25:38.194 }, 00:25:38.194 "driver_specific": { 00:25:38.194 "nvme": [ 00:25:38.194 { 00:25:38.194 "trid": { 00:25:38.194 "trtype": "TCP", 00:25:38.194 "adrfam": "IPv4", 00:25:38.194 "traddr": "10.0.0.2", 00:25:38.194 "trsvcid": "4421", 00:25:38.194 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:38.194 }, 00:25:38.194 "ctrlr_data": { 00:25:38.194 "cntlid": 3, 00:25:38.194 "vendor_id": "0x8086", 00:25:38.194 "model_number": "SPDK bdev Controller", 00:25:38.194 "serial_number": "00000000000000000000", 00:25:38.194 "firmware_revision": "24.01.1", 00:25:38.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:38.194 "oacs": { 00:25:38.194 "security": 0, 00:25:38.194 "format": 0, 00:25:38.194 "firmware": 0, 00:25:38.194 
"ns_manage": 0 00:25:38.194 }, 00:25:38.194 "multi_ctrlr": true, 00:25:38.194 "ana_reporting": false 00:25:38.194 }, 00:25:38.194 "vs": { 00:25:38.194 "nvme_version": "1.3" 00:25:38.194 }, 00:25:38.194 "ns_data": { 00:25:38.194 "id": 1, 00:25:38.194 "can_share": true 00:25:38.194 } 00:25:38.194 } 00:25:38.194 ], 00:25:38.194 "mp_policy": "active_passive" 00:25:38.194 } 00:25:38.194 } 00:25:38.194 ] 00:25:38.194 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.194 17:17:54 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.194 17:17:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.194 17:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:38.194 17:17:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:38.194 17:17:54 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ALNUk6ICGE 00:25:38.194 17:17:54 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:38.194 17:17:54 -- host/async_init.sh@78 -- # nvmftestfini 00:25:38.194 17:17:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:38.194 17:17:54 -- nvmf/common.sh@116 -- # sync 00:25:38.194 17:17:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:38.194 17:17:54 -- nvmf/common.sh@119 -- # set +e 00:25:38.194 17:17:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:38.194 17:17:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:38.195 rmmod nvme_tcp 00:25:38.195 rmmod nvme_fabrics 00:25:38.195 rmmod nvme_keyring 00:25:38.195 17:17:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:38.195 17:17:54 -- nvmf/common.sh@123 -- # set -e 00:25:38.195 17:17:54 -- nvmf/common.sh@124 -- # return 0 00:25:38.195 17:17:54 -- nvmf/common.sh@477 -- # '[' -n 621116 ']' 00:25:38.195 17:17:54 -- nvmf/common.sh@478 -- # killprocess 621116 00:25:38.195 17:17:54 -- common/autotest_common.sh@926 -- # '[' -z 621116 ']' 00:25:38.195 17:17:54 -- common/autotest_common.sh@930 -- # kill -0 621116 00:25:38.195 17:17:54 -- common/autotest_common.sh@931 -- # uname 00:25:38.195 17:17:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:38.195 17:17:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 621116 00:25:38.195 17:17:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:38.195 17:17:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:38.195 17:17:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 621116' 00:25:38.195 killing process with pid 621116 00:25:38.195 17:17:54 -- common/autotest_common.sh@945 -- # kill 621116 00:25:38.195 17:17:54 -- common/autotest_common.sh@950 -- # wait 621116 00:25:38.452 17:17:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:38.452 17:17:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:38.452 17:17:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:38.452 17:17:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:38.452 17:17:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:38.452 17:17:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.452 17:17:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.452 17:17:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.362 17:17:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:40.362 00:25:40.362 real 0m6.055s 00:25:40.362 user 0m2.919s 00:25:40.362 sys 0m1.794s 00:25:40.362 17:17:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.362 17:17:56 -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.362 ************************************ 00:25:40.362 END TEST nvmf_async_init 00:25:40.362 ************************************ 00:25:40.362 17:17:56 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:40.362 17:17:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:40.362 17:17:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.362 17:17:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.362 ************************************ 00:25:40.362 START TEST dma 00:25:40.362 ************************************ 00:25:40.362 17:17:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:40.621 * Looking for test storage... 00:25:40.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.621 17:17:56 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.621 17:17:56 -- nvmf/common.sh@7 -- # uname -s 00:25:40.621 17:17:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.621 17:17:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.621 17:17:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.621 17:17:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.621 17:17:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.621 17:17:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.621 17:17:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.621 17:17:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.621 17:17:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.621 17:17:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.621 17:17:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.621 17:17:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.621 17:17:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.621 17:17:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.621 17:17:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.621 17:17:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.621 17:17:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.621 17:17:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.621 17:17:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.621 17:17:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.621 17:17:56 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.621 17:17:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.621 17:17:56 -- paths/export.sh@5 -- # export PATH 00:25:40.621 17:17:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.621 17:17:56 -- nvmf/common.sh@46 -- # : 0 00:25:40.621 17:17:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:40.621 17:17:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:40.621 17:17:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:40.621 17:17:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.621 17:17:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.621 17:17:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:40.621 17:17:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:40.621 17:17:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:40.621 17:17:56 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:40.621 17:17:56 -- host/dma.sh@13 -- # exit 0 00:25:40.621 00:25:40.621 real 0m0.066s 00:25:40.621 user 0m0.031s 00:25:40.621 sys 0m0.041s 00:25:40.621 17:17:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.621 17:17:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.621 ************************************ 00:25:40.621 END TEST dma 00:25:40.621 ************************************ 00:25:40.621 17:17:56 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:40.621 17:17:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:40.621 17:17:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.621 17:17:56 -- common/autotest_common.sh@10 -- # set +x 00:25:40.621 ************************************ 00:25:40.621 START TEST nvmf_identify 00:25:40.621 ************************************ 00:25:40.621 17:17:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:40.621 * Looking for 
test storage... 00:25:40.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.621 17:17:56 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.621 17:17:56 -- nvmf/common.sh@7 -- # uname -s 00:25:40.621 17:17:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.621 17:17:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.621 17:17:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.621 17:17:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.622 17:17:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.622 17:17:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.622 17:17:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.622 17:17:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.622 17:17:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.622 17:17:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.622 17:17:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.622 17:17:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:40.622 17:17:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.622 17:17:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.622 17:17:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.622 17:17:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.622 17:17:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.622 17:17:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.622 17:17:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.622 17:17:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.622 17:17:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.622 17:17:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.622 17:17:56 -- paths/export.sh@5 -- # export PATH 00:25:40.622 17:17:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.622 17:17:56 -- nvmf/common.sh@46 -- # : 0 00:25:40.622 17:17:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:40.622 17:17:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:40.622 17:17:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:40.622 17:17:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.622 17:17:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.622 17:17:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:40.622 17:17:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:40.622 17:17:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:40.622 17:17:56 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:40.622 17:17:56 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:40.622 17:17:56 -- host/identify.sh@14 -- # nvmftestinit 00:25:40.622 17:17:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:40.622 17:17:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.622 17:17:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:40.622 17:17:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:40.622 17:17:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:40.622 17:17:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.622 17:17:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.622 17:17:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.622 17:17:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:40.622 17:17:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:40.622 17:17:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:40.622 17:17:56 -- common/autotest_common.sh@10 -- # set +x 00:25:42.522 17:17:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:42.522 17:17:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:42.522 17:17:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:42.522 17:17:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:42.522 17:17:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:42.522 17:17:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:42.522 17:17:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:42.522 17:17:58 -- nvmf/common.sh@294 -- # net_devs=() 00:25:42.522 17:17:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:42.522 17:17:58 -- nvmf/common.sh@295 
-- # e810=() 00:25:42.522 17:17:58 -- nvmf/common.sh@295 -- # local -ga e810 00:25:42.522 17:17:58 -- nvmf/common.sh@296 -- # x722=() 00:25:42.522 17:17:58 -- nvmf/common.sh@296 -- # local -ga x722 00:25:42.522 17:17:58 -- nvmf/common.sh@297 -- # mlx=() 00:25:42.522 17:17:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:42.522 17:17:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.522 17:17:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:42.523 17:17:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:42.523 17:17:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:42.523 17:17:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:42.523 17:17:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:42.523 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:42.523 17:17:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:42.523 17:17:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:42.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:42.523 17:17:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:42.523 17:17:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:42.523 17:17:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.523 17:17:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:42.523 17:17:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.523 17:17:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:42.523 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:25:42.523 17:17:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.523 17:17:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:42.523 17:17:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.523 17:17:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:42.523 17:17:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.523 17:17:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:42.523 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:42.523 17:17:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.523 17:17:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:42.523 17:17:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:42.523 17:17:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:42.523 17:17:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:42.523 17:17:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.523 17:17:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.523 17:17:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.523 17:17:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:42.523 17:17:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.523 17:17:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.523 17:17:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:42.523 17:17:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.523 17:17:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.523 17:17:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:42.523 17:17:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:42.523 17:17:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.523 17:17:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.523 17:17:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.781 17:17:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.781 17:17:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:42.781 17:17:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.781 17:17:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.781 17:17:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.781 17:17:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:42.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:25:42.781 00:25:42.781 --- 10.0.0.2 ping statistics --- 00:25:42.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.781 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:42.782 17:17:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:42.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:25:42.782 00:25:42.782 --- 10.0.0.1 ping statistics --- 00:25:42.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.782 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:25:42.782 17:17:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.782 17:17:58 -- nvmf/common.sh@410 -- # return 0 00:25:42.782 17:17:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:42.782 17:17:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.782 17:17:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:42.782 17:17:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:42.782 17:17:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.782 17:17:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:42.782 17:17:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:42.782 17:17:58 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:42.782 17:17:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:42.782 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.782 17:17:58 -- host/identify.sh@19 -- # nvmfpid=623378 00:25:42.782 17:17:58 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:42.782 17:17:58 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.782 17:17:58 -- host/identify.sh@23 -- # waitforlisten 623378 00:25:42.782 17:17:58 -- common/autotest_common.sh@819 -- # '[' -z 623378 ']' 00:25:42.782 17:17:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.782 17:17:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:42.782 17:17:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.782 17:17:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:42.782 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:25:42.782 [2024-07-20 17:17:58.823496] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:42.782 [2024-07-20 17:17:58.823596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.782 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.782 [2024-07-20 17:17:58.894316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:43.040 [2024-07-20 17:17:58.987425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:43.040 [2024-07-20 17:17:58.987583] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.040 [2024-07-20 17:17:58.987601] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.040 [2024-07-20 17:17:58.987614] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
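The nvmf_tcp_init trace above shows how the harness isolates the target behind a network namespace: the second ice port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, while the first port (cvl_0_0) moves into cvl_0_0_ns_spdk and is addressed as 10.0.0.2. A minimal stand-alone sketch of the same wiring, assuming the interface names and 10.0.0.0/24 addressing used by this run:

    # Target NIC lives in its own namespace; the initiator NIC stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity checks, matching the two pings above.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Launching nvmf_tgt under ip netns exec, as done above, forces target traffic across the physical link between the two ports instead of letting it short-circuit through the local stack.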
00:25:43.040 [2024-07-20 17:17:58.987667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.040 [2024-07-20 17:17:58.987701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.040 [2024-07-20 17:17:58.987722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.040 [2024-07-20 17:17:58.987724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.604 17:17:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:43.604 17:17:59 -- common/autotest_common.sh@852 -- # return 0 00:25:43.604 17:17:59 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:43.604 17:17:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.604 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.604 [2024-07-20 17:17:59.760362] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.863 17:17:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.863 17:17:59 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:43.863 17:17:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:43.863 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.863 17:17:59 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:43.863 17:17:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.863 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.863 Malloc0 00:25:43.863 17:17:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.863 17:17:59 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.863 17:17:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.863 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.863 17:17:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.863 17:17:59 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:43.863 17:17:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.863 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.863 17:17:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.863 17:17:59 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.863 17:17:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.863 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.863 [2024-07-20 17:17:59.837478] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.863 17:17:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.863 17:17:59 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:43.863 17:17:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.863 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.863 17:17:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.863 17:17:59 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:43.863 17:17:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.863 17:17:59 -- common/autotest_common.sh@10 -- # set +x 00:25:43.863 [2024-07-20 17:17:59.853255] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:43.863 [ 
00:25:43.863 { 00:25:43.863 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:43.863 "subtype": "Discovery", 00:25:43.863 "listen_addresses": [ 00:25:43.863 { 00:25:43.863 "transport": "TCP", 00:25:43.863 "trtype": "TCP", 00:25:43.863 "adrfam": "IPv4", 00:25:43.863 "traddr": "10.0.0.2", 00:25:43.863 "trsvcid": "4420" 00:25:43.863 } 00:25:43.863 ], 00:25:43.863 "allow_any_host": true, 00:25:43.863 "hosts": [] 00:25:43.863 }, 00:25:43.863 { 00:25:43.863 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:43.863 "subtype": "NVMe", 00:25:43.863 "listen_addresses": [ 00:25:43.863 { 00:25:43.863 "transport": "TCP", 00:25:43.863 "trtype": "TCP", 00:25:43.863 "adrfam": "IPv4", 00:25:43.863 "traddr": "10.0.0.2", 00:25:43.863 "trsvcid": "4420" 00:25:43.863 } 00:25:43.863 ], 00:25:43.863 "allow_any_host": true, 00:25:43.863 "hosts": [], 00:25:43.863 "serial_number": "SPDK00000000000001", 00:25:43.863 "model_number": "SPDK bdev Controller", 00:25:43.863 "max_namespaces": 32, 00:25:43.863 "min_cntlid": 1, 00:25:43.863 "max_cntlid": 65519, 00:25:43.863 "namespaces": [ 00:25:43.863 { 00:25:43.863 "nsid": 1, 00:25:43.863 "bdev_name": "Malloc0", 00:25:43.863 "name": "Malloc0", 00:25:43.863 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:43.863 "eui64": "ABCDEF0123456789", 00:25:43.863 "uuid": "bb4d8b45-6867-40f1-bc77-be90389da99d" 00:25:43.863 } 00:25:43.863 ] 00:25:43.863 } 00:25:43.863 ] 00:25:43.863 17:17:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.863 17:17:59 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:43.863 [2024-07-20 17:17:59.876154] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
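The rpc_cmd calls above map one-to-one onto SPDK's scripts/rpc.py methods, so the same target configuration can be reproduced by hand. A sketch, assuming the default /var/tmp/spdk.sock RPC socket and execution inside the target namespace:

    RPC='ip netns exec cvl_0_0_ns_spdk ./scripts/rpc.py'
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems                         # returns the JSON dumped above

With the target configured, the first spdk_nvme_identify pass, launched above against the discovery NQN, proceeds below.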
00:25:43.863 [2024-07-20 17:17:59.876199] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623535 ] 00:25:43.863 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.863 [2024-07-20 17:17:59.910094] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:43.863 [2024-07-20 17:17:59.910166] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:43.863 [2024-07-20 17:17:59.910176] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:43.863 [2024-07-20 17:17:59.910191] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:43.863 [2024-07-20 17:17:59.910204] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:43.863 [2024-07-20 17:17:59.911867] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:43.863 [2024-07-20 17:17:59.911934] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19c55a0 0 00:25:43.863 [2024-07-20 17:17:59.921808] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:43.863 [2024-07-20 17:17:59.921842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:43.863 [2024-07-20 17:17:59.921855] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:43.863 [2024-07-20 17:17:59.921862] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:43.863 [2024-07-20 17:17:59.921913] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.921926] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.921934] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.863 [2024-07-20 17:17:59.921953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:43.863 [2024-07-20 17:17:59.921980] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.863 [2024-07-20 17:17:59.929810] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.863 [2024-07-20 17:17:59.929828] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.863 [2024-07-20 17:17:59.929846] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.929853] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a303e0) on tqpair=0x19c55a0 00:25:43.863 [2024-07-20 17:17:59.929873] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:43.863 [2024-07-20 17:17:59.929884] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:43.863 [2024-07-20 17:17:59.929894] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:43.863 [2024-07-20 17:17:59.929913] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.929921] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.929928] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.863 [2024-07-20 17:17:59.929939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.863 [2024-07-20 17:17:59.929961] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.863 [2024-07-20 17:17:59.930214] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.863 [2024-07-20 17:17:59.930230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.863 [2024-07-20 17:17:59.930238] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.930245] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a303e0) on tqpair=0x19c55a0 00:25:43.863 [2024-07-20 17:17:59.930256] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:43.863 [2024-07-20 17:17:59.930270] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:43.863 [2024-07-20 17:17:59.930283] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.930290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.930297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.863 [2024-07-20 17:17:59.930308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.863 [2024-07-20 17:17:59.930329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.863 [2024-07-20 17:17:59.930551] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.863 [2024-07-20 17:17:59.930565] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.863 [2024-07-20 17:17:59.930572] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.930579] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a303e0) on tqpair=0x19c55a0 00:25:43.863 [2024-07-20 17:17:59.930589] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:43.863 [2024-07-20 17:17:59.930609] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:43.863 [2024-07-20 17:17:59.930622] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.930630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.863 [2024-07-20 17:17:59.930636] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.863 [2024-07-20 17:17:59.930647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.863 [2024-07-20 17:17:59.930668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.863 [2024-07-20 17:17:59.930886] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.864 [2024-07-20 
17:17:59.930903] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.864 [2024-07-20 17:17:59.930910] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.930917] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a303e0) on tqpair=0x19c55a0 00:25:43.864 [2024-07-20 17:17:59.930928] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:43.864 [2024-07-20 17:17:59.930945] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.930954] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.930961] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.930972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.864 [2024-07-20 17:17:59.930993] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.864 [2024-07-20 17:17:59.931204] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.864 [2024-07-20 17:17:59.931220] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.864 [2024-07-20 17:17:59.931228] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.931235] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a303e0) on tqpair=0x19c55a0 00:25:43.864 [2024-07-20 17:17:59.931245] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:43.864 [2024-07-20 17:17:59.931254] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:43.864 [2024-07-20 17:17:59.931267] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:43.864 [2024-07-20 17:17:59.931384] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:43.864 [2024-07-20 17:17:59.931392] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:43.864 [2024-07-20 17:17:59.931409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.931417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.931423] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.931434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.864 [2024-07-20 17:17:59.931456] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.864 [2024-07-20 17:17:59.931670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.864 [2024-07-20 17:17:59.931686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.864 [2024-07-20 17:17:59.931694] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.931704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a303e0) on tqpair=0x19c55a0 00:25:43.864 [2024-07-20 17:17:59.931716] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:43.864 [2024-07-20 17:17:59.931733] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.931742] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.931748] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.931759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.864 [2024-07-20 17:17:59.931780] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.864 [2024-07-20 17:17:59.931999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.864 [2024-07-20 17:17:59.932015] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.864 [2024-07-20 17:17:59.932023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.932030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a303e0) on tqpair=0x19c55a0 00:25:43.864 [2024-07-20 17:17:59.932039] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:43.864 [2024-07-20 17:17:59.932048] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:43.864 [2024-07-20 17:17:59.932062] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:43.864 [2024-07-20 17:17:59.932082] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:43.864 [2024-07-20 17:17:59.932097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.932105] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.932112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.932123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.864 [2024-07-20 17:17:59.932144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.864 [2024-07-20 17:17:59.932408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:43.864 [2024-07-20 17:17:59.932427] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:43.864 [2024-07-20 17:17:59.932437] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.932444] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c55a0): datao=0, datal=4096, cccid=0 00:25:43.864 [2024-07-20 17:17:59.932452] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a303e0) on tqpair(0x19c55a0): 
expected_datao=0, payload_size=4096 00:25:43.864 [2024-07-20 17:17:59.932531] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.932542] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.864 [2024-07-20 17:17:59.973042] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.864 [2024-07-20 17:17:59.973050] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a303e0) on tqpair=0x19c55a0 00:25:43.864 [2024-07-20 17:17:59.973073] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:43.864 [2024-07-20 17:17:59.973082] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:43.864 [2024-07-20 17:17:59.973095] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:43.864 [2024-07-20 17:17:59.973110] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:43.864 [2024-07-20 17:17:59.973118] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:43.864 [2024-07-20 17:17:59.973127] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:43.864 [2024-07-20 17:17:59.973148] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:43.864 [2024-07-20 17:17:59.973164] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973171] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.973189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:43.864 [2024-07-20 17:17:59.973213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.864 [2024-07-20 17:17:59.973428] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.864 [2024-07-20 17:17:59.973445] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.864 [2024-07-20 17:17:59.973452] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a303e0) on tqpair=0x19c55a0 00:25:43.864 [2024-07-20 17:17:59.973474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973489] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.973499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
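Everything from the FABRIC CONNECT above through this point is the generic fabrics controller bring-up: property-get VS and CAP, confirm CC.EN = 0 and CSTS.RDY = 0, set CC.EN = 1, poll until CSTS.RDY = 1, IDENTIFY CONTROLLER, then the asynchronous-event configuration programmed here (the remaining AER slots and the keep-alive feature follow just below). The same handshake can be exercised against this target with stock nvme-cli instead of spdk_nvme_identify; a sketch, assuming nvme-cli is installed and the nvme-tcp module is loaded as modprobe did earlier in the run:

    # Kernel-initiator view of the same discovery handshake.
    nvme discover -t tcp -a 10.0.0.2 -s 4420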
00:25:43.864 [2024-07-20 17:17:59.973509] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.973532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.864 [2024-07-20 17:17:59.973541] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973548] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.973563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.864 [2024-07-20 17:17:59.973573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973580] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973602] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.973611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.864 [2024-07-20 17:17:59.973620] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:43.864 [2024-07-20 17:17:59.973639] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:43.864 [2024-07-20 17:17:59.973654] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973662] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.864 [2024-07-20 17:17:59.973669] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c55a0) 00:25:43.864 [2024-07-20 17:17:59.973679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.864 [2024-07-20 17:17:59.973702] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a303e0, cid 0, qid 0 00:25:43.864 [2024-07-20 17:17:59.973733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30540, cid 1, qid 0 00:25:43.864 [2024-07-20 17:17:59.973745] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a306a0, cid 2, qid 0 00:25:43.865 [2024-07-20 17:17:59.973753] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30800, cid 3, qid 0 00:25:43.865 [2024-07-20 17:17:59.973761] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30960, cid 4, qid 0 00:25:43.865 [2024-07-20 17:17:59.975887] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.865 [2024-07-20 17:17:59.975904] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.865 [2024-07-20 17:17:59.975912] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.975919] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a30960) on tqpair=0x19c55a0 00:25:43.865 [2024-07-20 17:17:59.975929] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:43.865 [2024-07-20 17:17:59.975938] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:43.865 [2024-07-20 17:17:59.975955] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.975965] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.975971] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c55a0) 00:25:43.865 [2024-07-20 17:17:59.975982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.865 [2024-07-20 17:17:59.976003] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30960, cid 4, qid 0 00:25:43.865 [2024-07-20 17:17:59.976396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:43.865 [2024-07-20 17:17:59.976417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:43.865 [2024-07-20 17:17:59.976425] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.976432] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c55a0): datao=0, datal=4096, cccid=4 00:25:43.865 [2024-07-20 17:17:59.976439] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a30960) on tqpair(0x19c55a0): expected_datao=0, payload_size=4096 00:25:43.865 [2024-07-20 17:17:59.976466] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.976474] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.976611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.865 [2024-07-20 17:17:59.976623] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.865 [2024-07-20 17:17:59.976630] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.976637] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a30960) on tqpair=0x19c55a0 00:25:43.865 [2024-07-20 17:17:59.976657] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:43.865 [2024-07-20 17:17:59.976698] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.976709] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.976716] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c55a0) 00:25:43.865 [2024-07-20 17:17:59.976730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.865 [2024-07-20 17:17:59.976744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.976766] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.976772] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19c55a0) 00:25:43.865 [2024-07-20 
17:17:59.976782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:43.865 [2024-07-20 17:17:59.976831] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30960, cid 4, qid 0 00:25:43.865 [2024-07-20 17:17:59.976855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30ac0, cid 5, qid 0 00:25:43.865 [2024-07-20 17:17:59.977115] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:43.865 [2024-07-20 17:17:59.977130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:43.865 [2024-07-20 17:17:59.977138] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.977144] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c55a0): datao=0, datal=1024, cccid=4 00:25:43.865 [2024-07-20 17:17:59.977152] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a30960) on tqpair(0x19c55a0): expected_datao=0, payload_size=1024 00:25:43.865 [2024-07-20 17:17:59.977163] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.977170] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.977179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.865 [2024-07-20 17:17:59.977188] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.865 [2024-07-20 17:17:59.977195] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:17:59.977202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a30ac0) on tqpair=0x19c55a0 00:25:43.865 [2024-07-20 17:18:00.018073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.865 [2024-07-20 17:18:00.018109] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.865 [2024-07-20 17:18:00.018118] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.018126] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a30960) on tqpair=0x19c55a0 00:25:43.865 [2024-07-20 17:18:00.018152] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.018163] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.018170] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c55a0) 00:25:43.865 [2024-07-20 17:18:00.018185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.865 [2024-07-20 17:18:00.018217] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30960, cid 4, qid 0 00:25:43.865 [2024-07-20 17:18:00.018574] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:43.865 [2024-07-20 17:18:00.018594] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:43.865 [2024-07-20 17:18:00.018603] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.018610] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c55a0): datao=0, datal=3072, cccid=4 00:25:43.865 [2024-07-20 17:18:00.018619] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a30960) on tqpair(0x19c55a0): expected_datao=0, payload_size=3072 
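The GET LOG PAGE commands in this stretch read the discovery log, log identifier 0x70, the low byte of each cdw10 value; the upper half of cdw10 is NUMDL, the zero-based count of dwords to transfer. The pattern is a 1024-byte header probe to learn the generation counter and record count, the full 3072-byte read above (header plus two 1024-byte records), and an 8-byte re-read of the generation counter, issued just below, to confirm the log did not change mid-read. A small sketch decoding all three values:

    # cdw10 for GET LOG PAGE: bits 7:0 = log ID, bits 31:16 = NUMDL (0-based dwords).
    for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
      printf 'cdw10=%s  log-id=0x%02x  transfer=%4d bytes\n' "$cdw10" \
        $(( cdw10 & 0xff )) $(( (((cdw10 >> 16) & 0xffff) + 1) * 4 ))
    done
    # -> 1024, 3072 and 8 bytes respectively.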
00:25:43.865 [2024-07-20 17:18:00.018631] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.018640] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.018779] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:43.865 [2024-07-20 17:18:00.018800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:43.865 [2024-07-20 17:18:00.018818] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.018826] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a30960) on tqpair=0x19c55a0 00:25:43.865 [2024-07-20 17:18:00.018843] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.018852] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.018859] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19c55a0) 00:25:43.865 [2024-07-20 17:18:00.018871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.865 [2024-07-20 17:18:00.018902] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30960, cid 4, qid 0 00:25:43.865 [2024-07-20 17:18:00.019160] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:43.865 [2024-07-20 17:18:00.019178] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:43.865 [2024-07-20 17:18:00.019186] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.019193] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19c55a0): datao=0, datal=8, cccid=4 00:25:43.865 [2024-07-20 17:18:00.019201] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a30960) on tqpair(0x19c55a0): expected_datao=0, payload_size=8 00:25:43.865 [2024-07-20 17:18:00.019213] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:43.865 [2024-07-20 17:18:00.019221] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.123 [2024-07-20 17:18:00.060076] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.123 [2024-07-20 17:18:00.060118] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.123 [2024-07-20 17:18:00.060128] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.123 [2024-07-20 17:18:00.060136] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a30960) on tqpair=0x19c55a0 00:25:44.123 ===================================================== 00:25:44.123 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:44.123 ===================================================== 00:25:44.123 Controller Capabilities/Features 00:25:44.123 ================================ 00:25:44.123 Vendor ID: 0000 00:25:44.123 Subsystem Vendor ID: 0000 00:25:44.123 Serial Number: .................... 00:25:44.123 Model Number: ........................................ 
00:25:44.123 Firmware Version: 24.01.1 00:25:44.123 Recommended Arb Burst: 0 00:25:44.123 IEEE OUI Identifier: 00 00 00 00:25:44.123 Multi-path I/O 00:25:44.123 May have multiple subsystem ports: No 00:25:44.123 May have multiple controllers: No 00:25:44.123 Associated with SR-IOV VF: No 00:25:44.124 Max Data Transfer Size: 131072 00:25:44.124 Max Number of Namespaces: 0 00:25:44.124 Max Number of I/O Queues: 1024 00:25:44.124 NVMe Specification Version (VS): 1.3 00:25:44.124 NVMe Specification Version (Identify): 1.3 00:25:44.124 Maximum Queue Entries: 128 00:25:44.124 Contiguous Queues Required: Yes 00:25:44.124 Arbitration Mechanisms Supported 00:25:44.124 Weighted Round Robin: Not Supported 00:25:44.124 Vendor Specific: Not Supported 00:25:44.124 Reset Timeout: 15000 ms 00:25:44.124 Doorbell Stride: 4 bytes 00:25:44.124 NVM Subsystem Reset: Not Supported 00:25:44.124 Command Sets Supported 00:25:44.124 NVM Command Set: Supported 00:25:44.124 Boot Partition: Not Supported 00:25:44.124 Memory Page Size Minimum: 4096 bytes 00:25:44.124 Memory Page Size Maximum: 4096 bytes 00:25:44.124 Persistent Memory Region: Not Supported 00:25:44.124 Optional Asynchronous Events Supported 00:25:44.124 Namespace Attribute Notices: Not Supported 00:25:44.124 Firmware Activation Notices: Not Supported 00:25:44.124 ANA Change Notices: Not Supported 00:25:44.124 PLE Aggregate Log Change Notices: Not Supported 00:25:44.124 LBA Status Info Alert Notices: Not Supported 00:25:44.124 EGE Aggregate Log Change Notices: Not Supported 00:25:44.124 Normal NVM Subsystem Shutdown event: Not Supported 00:25:44.124 Zone Descriptor Change Notices: Not Supported 00:25:44.124 Discovery Log Change Notices: Supported 00:25:44.124 Controller Attributes 00:25:44.124 128-bit Host Identifier: Not Supported 00:25:44.124 Non-Operational Permissive Mode: Not Supported 00:25:44.124 NVM Sets: Not Supported 00:25:44.124 Read Recovery Levels: Not Supported 00:25:44.124 Endurance Groups: Not Supported 00:25:44.124 Predictable Latency Mode: Not Supported 00:25:44.124 Traffic Based Keep ALive: Not Supported 00:25:44.124 Namespace Granularity: Not Supported 00:25:44.124 SQ Associations: Not Supported 00:25:44.124 UUID List: Not Supported 00:25:44.124 Multi-Domain Subsystem: Not Supported 00:25:44.124 Fixed Capacity Management: Not Supported 00:25:44.124 Variable Capacity Management: Not Supported 00:25:44.124 Delete Endurance Group: Not Supported 00:25:44.124 Delete NVM Set: Not Supported 00:25:44.124 Extended LBA Formats Supported: Not Supported 00:25:44.124 Flexible Data Placement Supported: Not Supported 00:25:44.124 00:25:44.124 Controller Memory Buffer Support 00:25:44.124 ================================ 00:25:44.124 Supported: No 00:25:44.124 00:25:44.124 Persistent Memory Region Support 00:25:44.124 ================================ 00:25:44.124 Supported: No 00:25:44.124 00:25:44.124 Admin Command Set Attributes 00:25:44.124 ============================ 00:25:44.124 Security Send/Receive: Not Supported 00:25:44.124 Format NVM: Not Supported 00:25:44.124 Firmware Activate/Download: Not Supported 00:25:44.124 Namespace Management: Not Supported 00:25:44.124 Device Self-Test: Not Supported 00:25:44.124 Directives: Not Supported 00:25:44.124 NVMe-MI: Not Supported 00:25:44.124 Virtualization Management: Not Supported 00:25:44.124 Doorbell Buffer Config: Not Supported 00:25:44.124 Get LBA Status Capability: Not Supported 00:25:44.124 Command & Feature Lockdown Capability: Not Supported 00:25:44.124 Abort Command Limit: 1 00:25:44.124 
Async Event Request Limit: 4 00:25:44.124 Number of Firmware Slots: N/A 00:25:44.124 Firmware Slot 1 Read-Only: N/A 00:25:44.124 Firmware Activation Without Reset: N/A 00:25:44.124 Multiple Update Detection Support: N/A 00:25:44.124 Firmware Update Granularity: No Information Provided 00:25:44.124 Per-Namespace SMART Log: No 00:25:44.124 Asymmetric Namespace Access Log Page: Not Supported 00:25:44.124 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:44.124 Command Effects Log Page: Not Supported 00:25:44.124 Get Log Page Extended Data: Supported 00:25:44.124 Telemetry Log Pages: Not Supported 00:25:44.124 Persistent Event Log Pages: Not Supported 00:25:44.124 Supported Log Pages Log Page: May Support 00:25:44.124 Commands Supported & Effects Log Page: Not Supported 00:25:44.124 Feature Identifiers & Effects Log Page:May Support 00:25:44.124 NVMe-MI Commands & Effects Log Page: May Support 00:25:44.124 Data Area 4 for Telemetry Log: Not Supported 00:25:44.124 Error Log Page Entries Supported: 128 00:25:44.124 Keep Alive: Not Supported 00:25:44.124 00:25:44.124 NVM Command Set Attributes 00:25:44.124 ========================== 00:25:44.124 Submission Queue Entry Size 00:25:44.124 Max: 1 00:25:44.124 Min: 1 00:25:44.124 Completion Queue Entry Size 00:25:44.124 Max: 1 00:25:44.124 Min: 1 00:25:44.124 Number of Namespaces: 0 00:25:44.124 Compare Command: Not Supported 00:25:44.124 Write Uncorrectable Command: Not Supported 00:25:44.124 Dataset Management Command: Not Supported 00:25:44.124 Write Zeroes Command: Not Supported 00:25:44.124 Set Features Save Field: Not Supported 00:25:44.124 Reservations: Not Supported 00:25:44.124 Timestamp: Not Supported 00:25:44.124 Copy: Not Supported 00:25:44.124 Volatile Write Cache: Not Present 00:25:44.124 Atomic Write Unit (Normal): 1 00:25:44.124 Atomic Write Unit (PFail): 1 00:25:44.124 Atomic Compare & Write Unit: 1 00:25:44.124 Fused Compare & Write: Supported 00:25:44.124 Scatter-Gather List 00:25:44.124 SGL Command Set: Supported 00:25:44.124 SGL Keyed: Supported 00:25:44.124 SGL Bit Bucket Descriptor: Not Supported 00:25:44.124 SGL Metadata Pointer: Not Supported 00:25:44.124 Oversized SGL: Not Supported 00:25:44.124 SGL Metadata Address: Not Supported 00:25:44.124 SGL Offset: Supported 00:25:44.124 Transport SGL Data Block: Not Supported 00:25:44.124 Replay Protected Memory Block: Not Supported 00:25:44.124 00:25:44.124 Firmware Slot Information 00:25:44.124 ========================= 00:25:44.124 Active slot: 0 00:25:44.124 00:25:44.124 00:25:44.124 Error Log 00:25:44.124 ========= 00:25:44.124 00:25:44.124 Active Namespaces 00:25:44.124 ================= 00:25:44.124 Discovery Log Page 00:25:44.124 ================== 00:25:44.124 Generation Counter: 2 00:25:44.124 Number of Records: 2 00:25:44.124 Record Format: 0 00:25:44.124 00:25:44.124 Discovery Log Entry 0 00:25:44.124 ---------------------- 00:25:44.124 Transport Type: 3 (TCP) 00:25:44.124 Address Family: 1 (IPv4) 00:25:44.124 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:44.124 Entry Flags: 00:25:44.124 Duplicate Returned Information: 1 00:25:44.124 Explicit Persistent Connection Support for Discovery: 1 00:25:44.124 Transport Requirements: 00:25:44.124 Secure Channel: Not Required 00:25:44.124 Port ID: 0 (0x0000) 00:25:44.124 Controller ID: 65535 (0xffff) 00:25:44.124 Admin Max SQ Size: 128 00:25:44.124 Transport Service Identifier: 4420 00:25:44.124 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:44.124 Transport Address: 10.0.0.2 00:25:44.124 
Discovery Log Entry 1 00:25:44.124 ---------------------- 00:25:44.124 Transport Type: 3 (TCP) 00:25:44.124 Address Family: 1 (IPv4) 00:25:44.124 Subsystem Type: 2 (NVM Subsystem) 00:25:44.124 Entry Flags: 00:25:44.124 Duplicate Returned Information: 0 00:25:44.124 Explicit Persistent Connection Support for Discovery: 0 00:25:44.124 Transport Requirements: 00:25:44.124 Secure Channel: Not Required 00:25:44.124 Port ID: 0 (0x0000) 00:25:44.124 Controller ID: 65535 (0xffff) 00:25:44.124 Admin Max SQ Size: 128 00:25:44.124 Transport Service Identifier: 4420 00:25:44.124 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:44.124 Transport Address: 10.0.0.2 [2024-07-20 17:18:00.060273] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:44.124 [2024-07-20 17:18:00.060303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.124 [2024-07-20 17:18:00.060317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.124 [2024-07-20 17:18:00.060328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.124 [2024-07-20 17:18:00.060338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.124 [2024-07-20 17:18:00.060359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.124 [2024-07-20 17:18:00.060384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.124 [2024-07-20 17:18:00.060392] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c55a0) 00:25:44.124 [2024-07-20 17:18:00.060410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.124 [2024-07-20 17:18:00.060440] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30800, cid 3, qid 0 00:25:44.124 [2024-07-20 17:18:00.060694] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.124 [2024-07-20 17:18:00.060711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.124 [2024-07-20 17:18:00.060719] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.124 [2024-07-20 17:18:00.060726] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a30800) on tqpair=0x19c55a0 00:25:44.125 [2024-07-20 17:18:00.060741] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.060749] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.060756] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c55a0) 00:25:44.125 [2024-07-20 17:18:00.060771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.125 [2024-07-20 17:18:00.064802] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30800, cid 3, qid 0 00:25:44.125 [2024-07-20 17:18:00.064825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.125 [2024-07-20 17:18:00.064837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.125 [2024-07-20 17:18:00.064844] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.064851] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a30800) on tqpair=0x19c55a0 00:25:44.125 [2024-07-20 17:18:00.064862] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:44.125 [2024-07-20 17:18:00.064872] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:44.125 [2024-07-20 17:18:00.064890] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.064899] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.064906] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19c55a0) 00:25:44.125 [2024-07-20 17:18:00.064917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.125 [2024-07-20 17:18:00.064940] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a30800, cid 3, qid 0 00:25:44.125 [2024-07-20 17:18:00.065153] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.125 [2024-07-20 17:18:00.065169] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.125 [2024-07-20 17:18:00.065177] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.065184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a30800) on tqpair=0x19c55a0 00:25:44.125 [2024-07-20 17:18:00.065201] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 0 milliseconds 00:25:44.125 00:25:44.125 17:18:00 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:44.125 [2024-07-20 17:18:00.094661] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
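Discovery returned two records: entry 0 describes the discovery subsystem itself and entry 1 advertises nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. The identify pass just launched targets that NVM subsystem directly; -r takes the same whitespace-separated transport-ID string with only subnqn changed. For comparison, attaching the advertised subsystem from a kernel initiator would look like this; a sketch, assuming nvme-cli:

    # Connect to the subsystem found via discovery, list its namespace, detach.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list        # should show one 64 MiB namespace backed by Malloc0
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The EAL startup and controller bring-up below mirror the discovery pass.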
00:25:44.125 [2024-07-20 17:18:00.094711] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623556 ] 00:25:44.125 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.125 [2024-07-20 17:18:00.130063] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:44.125 [2024-07-20 17:18:00.130134] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:44.125 [2024-07-20 17:18:00.130144] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:44.125 [2024-07-20 17:18:00.130159] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:44.125 [2024-07-20 17:18:00.130173] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:44.125 [2024-07-20 17:18:00.130561] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:44.125 [2024-07-20 17:18:00.130610] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19455a0 0 00:25:44.125 [2024-07-20 17:18:00.144809] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:44.125 [2024-07-20 17:18:00.144828] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:44.125 [2024-07-20 17:18:00.144844] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:44.125 [2024-07-20 17:18:00.144851] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:44.125 [2024-07-20 17:18:00.144895] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.144906] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.144913] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.125 [2024-07-20 17:18:00.144928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:44.125 [2024-07-20 17:18:00.144954] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.125 [2024-07-20 17:18:00.152808] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.125 [2024-07-20 17:18:00.152838] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.125 [2024-07-20 17:18:00.152846] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.152853] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b03e0) on tqpair=0x19455a0 00:25:44.125 [2024-07-20 17:18:00.152869] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:44.125 [2024-07-20 17:18:00.152880] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:44.125 [2024-07-20 17:18:00.152890] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:44.125 [2024-07-20 17:18:00.152908] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.152917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.125 [2024-07-20 
17:18:00.152925] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.125 [2024-07-20 17:18:00.152936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.125 [2024-07-20 17:18:00.152961] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.125 [2024-07-20 17:18:00.153213] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.125 [2024-07-20 17:18:00.153230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.125 [2024-07-20 17:18:00.153237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.153244] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b03e0) on tqpair=0x19455a0 00:25:44.125 [2024-07-20 17:18:00.153254] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:44.125 [2024-07-20 17:18:00.153268] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:44.125 [2024-07-20 17:18:00.153281] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.153289] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.153295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.125 [2024-07-20 17:18:00.153306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.125 [2024-07-20 17:18:00.153328] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.125 [2024-07-20 17:18:00.153561] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.125 [2024-07-20 17:18:00.153576] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.125 [2024-07-20 17:18:00.153584] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.153591] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b03e0) on tqpair=0x19455a0 00:25:44.125 [2024-07-20 17:18:00.153601] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:44.125 [2024-07-20 17:18:00.153621] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:44.125 [2024-07-20 17:18:00.153635] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.153642] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.153649] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.125 [2024-07-20 17:18:00.153659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.125 [2024-07-20 17:18:00.153681] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.125 [2024-07-20 17:18:00.153914] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.125 [2024-07-20 17:18:00.153929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:25:44.125 [2024-07-20 17:18:00.153936] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.153943] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b03e0) on tqpair=0x19455a0 00:25:44.125 [2024-07-20 17:18:00.153953] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:44.125 [2024-07-20 17:18:00.153970] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.153979] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.153985] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.125 [2024-07-20 17:18:00.153996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.125 [2024-07-20 17:18:00.154017] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.125 [2024-07-20 17:18:00.154242] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.125 [2024-07-20 17:18:00.154255] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.125 [2024-07-20 17:18:00.154262] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.154268] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b03e0) on tqpair=0x19455a0 00:25:44.125 [2024-07-20 17:18:00.154279] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:44.125 [2024-07-20 17:18:00.154288] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:44.125 [2024-07-20 17:18:00.154301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:44.125 [2024-07-20 17:18:00.154411] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:44.125 [2024-07-20 17:18:00.154418] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:44.125 [2024-07-20 17:18:00.154433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.154440] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.125 [2024-07-20 17:18:00.154447] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.125 [2024-07-20 17:18:00.154457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.125 [2024-07-20 17:18:00.154479] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.126 [2024-07-20 17:18:00.154708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.126 [2024-07-20 17:18:00.154723] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.126 [2024-07-20 17:18:00.154731] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.154737] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b03e0) on 
tqpair=0x19455a0 00:25:44.126 [2024-07-20 17:18:00.154752] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:44.126 [2024-07-20 17:18:00.154769] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.154779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.154785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.126 [2024-07-20 17:18:00.154804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.126 [2024-07-20 17:18:00.154828] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.126 [2024-07-20 17:18:00.155057] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.126 [2024-07-20 17:18:00.155071] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.126 [2024-07-20 17:18:00.155079] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.155086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b03e0) on tqpair=0x19455a0 00:25:44.126 [2024-07-20 17:18:00.155096] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:44.126 [2024-07-20 17:18:00.155106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.155120] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:44.126 [2024-07-20 17:18:00.155135] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.155149] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.155157] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.155164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.126 [2024-07-20 17:18:00.155175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.126 [2024-07-20 17:18:00.155198] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.126 [2024-07-20 17:18:00.155483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.126 [2024-07-20 17:18:00.155496] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.126 [2024-07-20 17:18:00.155504] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.155512] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19455a0): datao=0, datal=4096, cccid=0 00:25:44.126 [2024-07-20 17:18:00.155521] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b03e0) on tqpair(0x19455a0): expected_datao=0, payload_size=4096 00:25:44.126 [2024-07-20 17:18:00.155605] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.155616] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.126 [2024-07-20 17:18:00.196048] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.126 [2024-07-20 17:18:00.196056] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196063] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b03e0) on tqpair=0x19455a0 00:25:44.126 [2024-07-20 17:18:00.196077] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:44.126 [2024-07-20 17:18:00.196087] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:44.126 [2024-07-20 17:18:00.196095] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:44.126 [2024-07-20 17:18:00.196109] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:44.126 [2024-07-20 17:18:00.196117] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:44.126 [2024-07-20 17:18:00.196126] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.196147] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.196161] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196169] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196176] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.126 [2024-07-20 17:18:00.196187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:44.126 [2024-07-20 17:18:00.196211] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.126 [2024-07-20 17:18:00.196408] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.126 [2024-07-20 17:18:00.196425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.126 [2024-07-20 17:18:00.196432] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196439] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b03e0) on tqpair=0x19455a0 00:25:44.126 [2024-07-20 17:18:00.196453] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196467] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19455a0) 00:25:44.126 [2024-07-20 17:18:00.196477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.126 [2024-07-20 17:18:00.196488] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196495] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196502] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19455a0) 00:25:44.126 [2024-07-20 17:18:00.196511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.126 [2024-07-20 17:18:00.196521] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196527] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196534] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19455a0) 00:25:44.126 [2024-07-20 17:18:00.196559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.126 [2024-07-20 17:18:00.196569] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196576] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196582] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19455a0) 00:25:44.126 [2024-07-20 17:18:00.196590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.126 [2024-07-20 17:18:00.196599] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.196619] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.196632] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196639] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.196649] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19455a0) 00:25:44.126 [2024-07-20 17:18:00.196660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.126 [2024-07-20 17:18:00.196684] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b03e0, cid 0, qid 0 00:25:44.126 [2024-07-20 17:18:00.196711] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0540, cid 1, qid 0 00:25:44.126 [2024-07-20 17:18:00.196720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b06a0, cid 2, qid 0 00:25:44.126 [2024-07-20 17:18:00.196728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0800, cid 3, qid 0 00:25:44.126 [2024-07-20 17:18:00.196736] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0960, cid 4, qid 0 00:25:44.126 [2024-07-20 17:18:00.196993] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.126 [2024-07-20 17:18:00.197007] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.126 [2024-07-20 17:18:00.197015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.197022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0960) on tqpair=0x19455a0 00:25:44.126 [2024-07-20 17:18:00.197033] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:44.126 
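
At this point in the trace the host has queued its four ASYNC EVENT REQUEST commands (cid 0 through 3) and is reading the keep-alive timer feature. A hedged sketch of the corresponding application-side setup (assuming SPDK's public C API; poll_admin_queue() is an illustrative name, not from this run):

/* aer_sketch.c - hypothetical example, not part of this test run */
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* cdw0 of the completion carries the async event type/info. */
		printf("async event: cdw0=0x%08x\n", cpl->cdw0);
	}
}

static void
poll_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	for (;;) {
		/* Reaps admin completions and also transmits keep-alives on
		 * the cadence negotiated here (the trace below reports
		 * "Sending keep alive every 5000000 us"). */
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
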
[2024-07-20 17:18:00.197042] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.197056] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.197073] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.197085] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.197092] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.197099] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19455a0) 00:25:44.126 [2024-07-20 17:18:00.197110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:44.126 [2024-07-20 17:18:00.197132] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0960, cid 4, qid 0 00:25:44.126 [2024-07-20 17:18:00.197365] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.126 [2024-07-20 17:18:00.197378] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.126 [2024-07-20 17:18:00.197385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.197392] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0960) on tqpair=0x19455a0 00:25:44.126 [2024-07-20 17:18:00.197457] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.197475] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:44.126 [2024-07-20 17:18:00.197490] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.197498] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.126 [2024-07-20 17:18:00.197504] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19455a0) 00:25:44.127 [2024-07-20 17:18:00.197515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.127 [2024-07-20 17:18:00.197552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0960, cid 4, qid 0 00:25:44.127 [2024-07-20 17:18:00.197843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.127 [2024-07-20 17:18:00.197858] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.127 [2024-07-20 17:18:00.197870] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.127 [2024-07-20 17:18:00.197878] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19455a0): datao=0, datal=4096, cccid=4 00:25:44.127 [2024-07-20 17:18:00.197886] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0960) on tqpair(0x19455a0): expected_datao=0, payload_size=4096 00:25:44.127 [2024-07-20 17:18:00.197967] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.127 [2024-07-20 17:18:00.197976] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.127 [2024-07-20 17:18:00.239022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.127 [2024-07-20 17:18:00.239041] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.127 [2024-07-20 17:18:00.239049] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.127 [2024-07-20 17:18:00.239056] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0960) on tqpair=0x19455a0 00:25:44.127 [2024-07-20 17:18:00.239085] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:44.127 [2024-07-20 17:18:00.239110] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:44.127 [2024-07-20 17:18:00.239130] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:44.127 [2024-07-20 17:18:00.239143] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.127 [2024-07-20 17:18:00.239151] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.127 [2024-07-20 17:18:00.239158] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19455a0) 00:25:44.127 [2024-07-20 17:18:00.239169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.127 [2024-07-20 17:18:00.239192] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0960, cid 4, qid 0 00:25:44.127 [2024-07-20 17:18:00.239421] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.127 [2024-07-20 17:18:00.239435] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.127 [2024-07-20 17:18:00.239443] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.127 [2024-07-20 17:18:00.239449] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19455a0): datao=0, datal=4096, cccid=4 00:25:44.127 [2024-07-20 17:18:00.239457] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0960) on tqpair(0x19455a0): expected_datao=0, payload_size=4096 00:25:44.127 [2024-07-20 17:18:00.239537] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.127 [2024-07-20 17:18:00.239546] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.281809] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.385 [2024-07-20 17:18:00.281831] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.385 [2024-07-20 17:18:00.281840] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.281848] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0960) on tqpair=0x19455a0 00:25:44.385 [2024-07-20 17:18:00.281874] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:44.385 [2024-07-20 17:18:00.281895] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:44.385 [2024-07-20 17:18:00.281911] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.385 [2024-07-20 
17:18:00.281919] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.281926] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19455a0) 00:25:44.385 [2024-07-20 17:18:00.281937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.385 [2024-07-20 17:18:00.281968] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0960, cid 4, qid 0 00:25:44.385 [2024-07-20 17:18:00.282218] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.385 [2024-07-20 17:18:00.282234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.385 [2024-07-20 17:18:00.282241] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282248] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19455a0): datao=0, datal=4096, cccid=4 00:25:44.385 [2024-07-20 17:18:00.282256] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0960) on tqpair(0x19455a0): expected_datao=0, payload_size=4096 00:25:44.385 [2024-07-20 17:18:00.282268] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282275] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.385 [2024-07-20 17:18:00.282392] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.385 [2024-07-20 17:18:00.282399] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282406] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0960) on tqpair=0x19455a0 00:25:44.385 [2024-07-20 17:18:00.282422] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:44.385 [2024-07-20 17:18:00.282437] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:44.385 [2024-07-20 17:18:00.282454] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:44.385 [2024-07-20 17:18:00.282466] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:44.385 [2024-07-20 17:18:00.282475] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:44.385 [2024-07-20 17:18:00.282483] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:44.385 [2024-07-20 17:18:00.282491] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:44.385 [2024-07-20 17:18:00.282500] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:44.385 [2024-07-20 17:18:00.282534] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282550] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19455a0) 00:25:44.385 [2024-07-20 17:18:00.282561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.385 [2024-07-20 17:18:00.282573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282580] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19455a0) 00:25:44.385 [2024-07-20 17:18:00.282610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.385 [2024-07-20 17:18:00.282643] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0960, cid 4, qid 0 00:25:44.385 [2024-07-20 17:18:00.282655] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0ac0, cid 5, qid 0 00:25:44.385 [2024-07-20 17:18:00.282913] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.385 [2024-07-20 17:18:00.282930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.385 [2024-07-20 17:18:00.282937] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282948] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0960) on tqpair=0x19455a0 00:25:44.385 [2024-07-20 17:18:00.282961] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.385 [2024-07-20 17:18:00.282971] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.385 [2024-07-20 17:18:00.282979] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.282985] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0ac0) on tqpair=0x19455a0 00:25:44.385 [2024-07-20 17:18:00.283003] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.283012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.283019] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19455a0) 00:25:44.385 [2024-07-20 17:18:00.283029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.385 [2024-07-20 17:18:00.283051] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0ac0, cid 5, qid 0 00:25:44.385 [2024-07-20 17:18:00.283279] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.385 [2024-07-20 17:18:00.283295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.385 [2024-07-20 17:18:00.283302] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.283309] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0ac0) on tqpair=0x19455a0 00:25:44.385 [2024-07-20 17:18:00.283327] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.283336] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.283343] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19455a0) 00:25:44.385 [2024-07-20 17:18:00.283353] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.385 [2024-07-20 17:18:00.283374] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0ac0, cid 5, qid 0 00:25:44.385 [2024-07-20 17:18:00.283600] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.385 [2024-07-20 17:18:00.283612] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.385 [2024-07-20 17:18:00.283619] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.283626] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0ac0) on tqpair=0x19455a0 00:25:44.385 [2024-07-20 17:18:00.283643] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.283652] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.385 [2024-07-20 17:18:00.283659] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19455a0) 00:25:44.385 [2024-07-20 17:18:00.283669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.385 [2024-07-20 17:18:00.283690] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0ac0, cid 5, qid 0 00:25:44.385 [2024-07-20 17:18:00.283924] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.385 [2024-07-20 17:18:00.283938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.385 [2024-07-20 17:18:00.283946] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.283952] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0ac0) on tqpair=0x19455a0 00:25:44.386 [2024-07-20 17:18:00.283974] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.283984] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.283991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19455a0) 00:25:44.386 [2024-07-20 17:18:00.284001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.386 [2024-07-20 17:18:00.284018] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284026] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284033] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19455a0) 00:25:44.386 [2024-07-20 17:18:00.284042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.386 [2024-07-20 17:18:00.284054] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284061] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19455a0) 00:25:44.386 [2024-07-20 17:18:00.284077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:44.386 [2024-07-20 17:18:00.284088] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284096] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284102] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19455a0) 00:25:44.386 [2024-07-20 17:18:00.284112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.386 [2024-07-20 17:18:00.284135] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0ac0, cid 5, qid 0 00:25:44.386 [2024-07-20 17:18:00.284146] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0960, cid 4, qid 0 00:25:44.386 [2024-07-20 17:18:00.284154] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0c20, cid 6, qid 0 00:25:44.386 [2024-07-20 17:18:00.284162] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0d80, cid 7, qid 0 00:25:44.386 [2024-07-20 17:18:00.284473] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.386 [2024-07-20 17:18:00.284486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.386 [2024-07-20 17:18:00.284493] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284500] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19455a0): datao=0, datal=8192, cccid=5 00:25:44.386 [2024-07-20 17:18:00.284508] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0ac0) on tqpair(0x19455a0): expected_datao=0, payload_size=8192 00:25:44.386 [2024-07-20 17:18:00.284734] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284745] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284754] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.386 [2024-07-20 17:18:00.284763] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.386 [2024-07-20 17:18:00.284770] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284777] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19455a0): datao=0, datal=512, cccid=4 00:25:44.386 [2024-07-20 17:18:00.284784] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0960) on tqpair(0x19455a0): expected_datao=0, payload_size=512 00:25:44.386 [2024-07-20 17:18:00.284802] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284811] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284820] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.386 [2024-07-20 17:18:00.284829] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.386 [2024-07-20 17:18:00.284836] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284842] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19455a0): datao=0, datal=512, cccid=6 00:25:44.386 [2024-07-20 17:18:00.284854] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0c20) on tqpair(0x19455a0): expected_datao=0, payload_size=512 00:25:44.386 [2024-07-20 17:18:00.284865] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284872] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284881] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:44.386 [2024-07-20 17:18:00.284890] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:44.386 [2024-07-20 17:18:00.284897] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284903] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19455a0): datao=0, datal=4096, cccid=7 00:25:44.386 [2024-07-20 17:18:00.284911] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19b0d80) on tqpair(0x19455a0): expected_datao=0, payload_size=4096 00:25:44.386 [2024-07-20 17:18:00.284922] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284929] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284941] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.386 [2024-07-20 17:18:00.284951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.386 [2024-07-20 17:18:00.284958] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.284964] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0ac0) on tqpair=0x19455a0 00:25:44.386 [2024-07-20 17:18:00.284987] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.386 [2024-07-20 17:18:00.284999] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.386 [2024-07-20 17:18:00.285006] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.285012] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0960) on tqpair=0x19455a0 00:25:44.386 [2024-07-20 17:18:00.285028] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.386 [2024-07-20 17:18:00.285039] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.386 [2024-07-20 17:18:00.285046] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.285053] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0c20) on tqpair=0x19455a0 00:25:44.386 [2024-07-20 17:18:00.285065] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.386 [2024-07-20 17:18:00.285075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.386 [2024-07-20 17:18:00.285082] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.386 [2024-07-20 17:18:00.285089] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0d80) on tqpair=0x19455a0 00:25:44.386 ===================================================== 00:25:44.386 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:44.386 ===================================================== 00:25:44.386 Controller Capabilities/Features 00:25:44.386 ================================ 00:25:44.386 Vendor ID: 8086 00:25:44.386 Subsystem Vendor ID: 8086 00:25:44.386 Serial Number: SPDK00000000000001 00:25:44.386 Model Number: SPDK bdev Controller 00:25:44.386 Firmware Version: 24.01.1 00:25:44.386 Recommended Arb Burst: 6 00:25:44.386 IEEE OUI Identifier: e4 d2 5c 00:25:44.386 Multi-path I/O 00:25:44.386 May have multiple subsystem 
ports: Yes 00:25:44.386 May have multiple controllers: Yes 00:25:44.386 Associated with SR-IOV VF: No 00:25:44.386 Max Data Transfer Size: 131072 00:25:44.386 Max Number of Namespaces: 32 00:25:44.386 Max Number of I/O Queues: 127 00:25:44.386 NVMe Specification Version (VS): 1.3 00:25:44.386 NVMe Specification Version (Identify): 1.3 00:25:44.386 Maximum Queue Entries: 128 00:25:44.386 Contiguous Queues Required: Yes 00:25:44.386 Arbitration Mechanisms Supported 00:25:44.386 Weighted Round Robin: Not Supported 00:25:44.386 Vendor Specific: Not Supported 00:25:44.386 Reset Timeout: 15000 ms 00:25:44.386 Doorbell Stride: 4 bytes 00:25:44.386 NVM Subsystem Reset: Not Supported 00:25:44.386 Command Sets Supported 00:25:44.386 NVM Command Set: Supported 00:25:44.386 Boot Partition: Not Supported 00:25:44.386 Memory Page Size Minimum: 4096 bytes 00:25:44.386 Memory Page Size Maximum: 4096 bytes 00:25:44.386 Persistent Memory Region: Not Supported 00:25:44.386 Optional Asynchronous Events Supported 00:25:44.386 Namespace Attribute Notices: Supported 00:25:44.386 Firmware Activation Notices: Not Supported 00:25:44.386 ANA Change Notices: Not Supported 00:25:44.386 PLE Aggregate Log Change Notices: Not Supported 00:25:44.386 LBA Status Info Alert Notices: Not Supported 00:25:44.386 EGE Aggregate Log Change Notices: Not Supported 00:25:44.386 Normal NVM Subsystem Shutdown event: Not Supported 00:25:44.386 Zone Descriptor Change Notices: Not Supported 00:25:44.386 Discovery Log Change Notices: Not Supported 00:25:44.386 Controller Attributes 00:25:44.386 128-bit Host Identifier: Supported 00:25:44.386 Non-Operational Permissive Mode: Not Supported 00:25:44.386 NVM Sets: Not Supported 00:25:44.386 Read Recovery Levels: Not Supported 00:25:44.386 Endurance Groups: Not Supported 00:25:44.386 Predictable Latency Mode: Not Supported 00:25:44.386 Traffic Based Keep ALive: Not Supported 00:25:44.386 Namespace Granularity: Not Supported 00:25:44.386 SQ Associations: Not Supported 00:25:44.386 UUID List: Not Supported 00:25:44.386 Multi-Domain Subsystem: Not Supported 00:25:44.386 Fixed Capacity Management: Not Supported 00:25:44.386 Variable Capacity Management: Not Supported 00:25:44.386 Delete Endurance Group: Not Supported 00:25:44.386 Delete NVM Set: Not Supported 00:25:44.386 Extended LBA Formats Supported: Not Supported 00:25:44.386 Flexible Data Placement Supported: Not Supported 00:25:44.386 00:25:44.386 Controller Memory Buffer Support 00:25:44.386 ================================ 00:25:44.386 Supported: No 00:25:44.386 00:25:44.386 Persistent Memory Region Support 00:25:44.386 ================================ 00:25:44.386 Supported: No 00:25:44.386 00:25:44.386 Admin Command Set Attributes 00:25:44.386 ============================ 00:25:44.386 Security Send/Receive: Not Supported 00:25:44.386 Format NVM: Not Supported 00:25:44.386 Firmware Activate/Download: Not Supported 00:25:44.386 Namespace Management: Not Supported 00:25:44.386 Device Self-Test: Not Supported 00:25:44.386 Directives: Not Supported 00:25:44.386 NVMe-MI: Not Supported 00:25:44.386 Virtualization Management: Not Supported 00:25:44.386 Doorbell Buffer Config: Not Supported 00:25:44.386 Get LBA Status Capability: Not Supported 00:25:44.386 Command & Feature Lockdown Capability: Not Supported 00:25:44.386 Abort Command Limit: 4 00:25:44.386 Async Event Request Limit: 4 00:25:44.386 Number of Firmware Slots: N/A 00:25:44.386 Firmware Slot 1 Read-Only: N/A 00:25:44.386 Firmware Activation Without Reset: N/A 00:25:44.386 Multiple 
Update Detection Support: N/A 00:25:44.386 Firmware Update Granularity: No Information Provided 00:25:44.386 Per-Namespace SMART Log: No 00:25:44.386 Asymmetric Namespace Access Log Page: Not Supported 00:25:44.386 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:44.386 Command Effects Log Page: Supported 00:25:44.386 Get Log Page Extended Data: Supported 00:25:44.386 Telemetry Log Pages: Not Supported 00:25:44.386 Persistent Event Log Pages: Not Supported 00:25:44.386 Supported Log Pages Log Page: May Support 00:25:44.386 Commands Supported & Effects Log Page: Not Supported 00:25:44.386 Feature Identifiers & Effects Log Page:May Support 00:25:44.386 NVMe-MI Commands & Effects Log Page: May Support 00:25:44.386 Data Area 4 for Telemetry Log: Not Supported 00:25:44.386 Error Log Page Entries Supported: 128 00:25:44.386 Keep Alive: Supported 00:25:44.386 Keep Alive Granularity: 10000 ms 00:25:44.386 00:25:44.386 NVM Command Set Attributes 00:25:44.386 ========================== 00:25:44.386 Submission Queue Entry Size 00:25:44.386 Max: 64 00:25:44.386 Min: 64 00:25:44.386 Completion Queue Entry Size 00:25:44.386 Max: 16 00:25:44.386 Min: 16 00:25:44.386 Number of Namespaces: 32 00:25:44.386 Compare Command: Supported 00:25:44.386 Write Uncorrectable Command: Not Supported 00:25:44.386 Dataset Management Command: Supported 00:25:44.386 Write Zeroes Command: Supported 00:25:44.386 Set Features Save Field: Not Supported 00:25:44.386 Reservations: Supported 00:25:44.386 Timestamp: Not Supported 00:25:44.386 Copy: Supported 00:25:44.386 Volatile Write Cache: Present 00:25:44.386 Atomic Write Unit (Normal): 1 00:25:44.386 Atomic Write Unit (PFail): 1 00:25:44.386 Atomic Compare & Write Unit: 1 00:25:44.386 Fused Compare & Write: Supported 00:25:44.386 Scatter-Gather List 00:25:44.386 SGL Command Set: Supported 00:25:44.386 SGL Keyed: Supported 00:25:44.386 SGL Bit Bucket Descriptor: Not Supported 00:25:44.386 SGL Metadata Pointer: Not Supported 00:25:44.386 Oversized SGL: Not Supported 00:25:44.386 SGL Metadata Address: Not Supported 00:25:44.386 SGL Offset: Supported 00:25:44.386 Transport SGL Data Block: Not Supported 00:25:44.386 Replay Protected Memory Block: Not Supported 00:25:44.386 00:25:44.386 Firmware Slot Information 00:25:44.386 ========================= 00:25:44.386 Active slot: 1 00:25:44.386 Slot 1 Firmware Revision: 24.01.1 00:25:44.386 00:25:44.386 00:25:44.386 Commands Supported and Effects 00:25:44.386 ============================== 00:25:44.386 Admin Commands 00:25:44.386 -------------- 00:25:44.386 Get Log Page (02h): Supported 00:25:44.386 Identify (06h): Supported 00:25:44.386 Abort (08h): Supported 00:25:44.386 Set Features (09h): Supported 00:25:44.386 Get Features (0Ah): Supported 00:25:44.386 Asynchronous Event Request (0Ch): Supported 00:25:44.386 Keep Alive (18h): Supported 00:25:44.386 I/O Commands 00:25:44.386 ------------ 00:25:44.386 Flush (00h): Supported LBA-Change 00:25:44.386 Write (01h): Supported LBA-Change 00:25:44.386 Read (02h): Supported 00:25:44.386 Compare (05h): Supported 00:25:44.386 Write Zeroes (08h): Supported LBA-Change 00:25:44.386 Dataset Management (09h): Supported LBA-Change 00:25:44.386 Copy (19h): Supported LBA-Change 00:25:44.386 Unknown (79h): Supported LBA-Change 00:25:44.386 Unknown (7Ah): Supported 00:25:44.386 00:25:44.386 Error Log 00:25:44.387 ========= 00:25:44.387 00:25:44.387 Arbitration 00:25:44.387 =========== 00:25:44.387 Arbitration Burst: 1 00:25:44.387 00:25:44.387 Power Management 00:25:44.387 ================ 00:25:44.387 
Number of Power States: 1 00:25:44.387 Current Power State: Power State #0 00:25:44.387 Power State #0: 00:25:44.387 Max Power: 0.00 W 00:25:44.387 Non-Operational State: Operational 00:25:44.387 Entry Latency: Not Reported 00:25:44.387 Exit Latency: Not Reported 00:25:44.387 Relative Read Throughput: 0 00:25:44.387 Relative Read Latency: 0 00:25:44.387 Relative Write Throughput: 0 00:25:44.387 Relative Write Latency: 0 00:25:44.387 Idle Power: Not Reported 00:25:44.387 Active Power: Not Reported 00:25:44.387 Non-Operational Permissive Mode: Not Supported 00:25:44.387 00:25:44.387 Health Information 00:25:44.387 ================== 00:25:44.387 Critical Warnings: 00:25:44.387 Available Spare Space: OK 00:25:44.387 Temperature: OK 00:25:44.387 Device Reliability: OK 00:25:44.387 Read Only: No 00:25:44.387 Volatile Memory Backup: OK 00:25:44.387 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:44.387 Temperature Threshold: [2024-07-20 17:18:00.285217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.285230] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.285237] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19455a0) 00:25:44.387 [2024-07-20 17:18:00.285248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.387 [2024-07-20 17:18:00.285271] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0d80, cid 7, qid 0 00:25:44.387 [2024-07-20 17:18:00.285507] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.387 [2024-07-20 17:18:00.285523] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.387 [2024-07-20 17:18:00.285530] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.285537] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0d80) on tqpair=0x19455a0 00:25:44.387 [2024-07-20 17:18:00.285579] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:44.387 [2024-07-20 17:18:00.285601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.387 [2024-07-20 17:18:00.285613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.387 [2024-07-20 17:18:00.285628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.387 [2024-07-20 17:18:00.285639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.387 [2024-07-20 17:18:00.285652] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.285660] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.285667] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19455a0) 00:25:44.387 [2024-07-20 17:18:00.285678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.387 [2024-07-20 17:18:00.285699] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0800, cid 3, qid 0 
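
The interleaved debug records above have reached teardown: "Prepare to destruct SSD", the outstanding AERs completing as ABORTED - SQ DELETION, then the Fabrics Property Set that writes the shutdown notification into CC. A hedged sketch of the call that triggers this sequence on the host side (assuming SPDK's public C API; disconnect_ctrlr() is an illustrative name, not from this run):

/* detach_sketch.c - hypothetical example, not part of this test run */
#include "spdk/nvme.h"

static void
disconnect_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Performs the normal-shutdown handshake traced here: CC.SHN is set,
	 * CSTS.SHST is polled (the target reported RTD3E = 0 us, so the
	 * library's 10000 ms default shutdown timeout applies), then the
	 * qpairs and the TCP socket are torn down. */
	if (spdk_nvme_detach(ctrlr) != 0) {
		fprintf(stderr, "detach failed\n");
	}
}
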
00:25:44.387 [2024-07-20 17:18:00.285925] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.387 [2024-07-20 17:18:00.285939] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.387 [2024-07-20 17:18:00.285947] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.285954] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0800) on tqpair=0x19455a0 00:25:44.387 [2024-07-20 17:18:00.285966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.285974] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.285981] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19455a0) 00:25:44.387 [2024-07-20 17:18:00.285991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.387 [2024-07-20 17:18:00.286017] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0800, cid 3, qid 0 00:25:44.387 [2024-07-20 17:18:00.286261] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.387 [2024-07-20 17:18:00.286274] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.387 [2024-07-20 17:18:00.286281] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.286288] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0800) on tqpair=0x19455a0 00:25:44.387 [2024-07-20 17:18:00.286297] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:44.387 [2024-07-20 17:18:00.286305] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:44.387 [2024-07-20 17:18:00.286321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.286330] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.286337] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19455a0) 00:25:44.387 [2024-07-20 17:18:00.286347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.387 [2024-07-20 17:18:00.286367] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0800, cid 3, qid 0 00:25:44.387 [2024-07-20 17:18:00.286593] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.387 [2024-07-20 17:18:00.286605] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.387 [2024-07-20 17:18:00.286612] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.286619] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0800) on tqpair=0x19455a0 00:25:44.387 [2024-07-20 17:18:00.286636] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.286646] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.286653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19455a0) 00:25:44.387 [2024-07-20 17:18:00.286663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.387 [2024-07-20 
17:18:00.286688] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0800, cid 3, qid 0 00:25:44.387 [2024-07-20 17:18:00.286919] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.387 [2024-07-20 17:18:00.286933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.387 [2024-07-20 17:18:00.286940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.286947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0800) on tqpair=0x19455a0 00:25:44.387 [2024-07-20 17:18:00.286965] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.286974] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.286981] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19455a0) 00:25:44.387 [2024-07-20 17:18:00.286992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.387 [2024-07-20 17:18:00.287013] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0800, cid 3, qid 0 00:25:44.387 [2024-07-20 17:18:00.287237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.387 [2024-07-20 17:18:00.287250] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.387 [2024-07-20 17:18:00.287257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.287263] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0800) on tqpair=0x19455a0 00:25:44.387 [2024-07-20 17:18:00.287281] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.287290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.287297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19455a0) 00:25:44.387 [2024-07-20 17:18:00.287307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.387 [2024-07-20 17:18:00.287327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0800, cid 3, qid 0 00:25:44.387 [2024-07-20 17:18:00.287548] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.387 [2024-07-20 17:18:00.287564] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.387 [2024-07-20 17:18:00.287571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.287578] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0800) on tqpair=0x19455a0 00:25:44.387 [2024-07-20 17:18:00.287596] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.287606] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.287613] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19455a0) 00:25:44.387 [2024-07-20 17:18:00.287623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.387 [2024-07-20 17:18:00.287644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0800, cid 3, qid 0 00:25:44.387 [2024-07-20 17:18:00.291812] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:25:44.387 [2024-07-20 17:18:00.291831] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.387 [2024-07-20 17:18:00.291839] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.291846] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0800) on tqpair=0x19455a0 00:25:44.387 [2024-07-20 17:18:00.291865] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.291875] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.291882] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19455a0) 00:25:44.387 [2024-07-20 17:18:00.291893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.387 [2024-07-20 17:18:00.291921] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19b0800, cid 3, qid 0 00:25:44.387 [2024-07-20 17:18:00.292147] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:44.387 [2024-07-20 17:18:00.292159] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:44.387 [2024-07-20 17:18:00.292167] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:44.387 [2024-07-20 17:18:00.292174] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x19b0800) on tqpair=0x19455a0 00:25:44.387 [2024-07-20 17:18:00.292188] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:25:44.387 0 Kelvin (-273 Celsius) 00:25:44.387 Available Spare: 0% 00:25:44.387 Available Spare Threshold: 0% 00:25:44.387 Life Percentage Used: 0% 00:25:44.387 Data Units Read: 0 00:25:44.387 Data Units Written: 0 00:25:44.387 Host Read Commands: 0 00:25:44.387 Host Write Commands: 0 00:25:44.387 Controller Busy Time: 0 minutes 00:25:44.387 Power Cycles: 0 00:25:44.387 Power On Hours: 0 hours 00:25:44.387 Unsafe Shutdowns: 0 00:25:44.387 Unrecoverable Media Errors: 0 00:25:44.387 Lifetime Error Log Entries: 0 00:25:44.387 Warning Temperature Time: 0 minutes 00:25:44.387 Critical Temperature Time: 0 minutes 00:25:44.387 00:25:44.387 Number of Queues 00:25:44.387 ================ 00:25:44.387 Number of I/O Submission Queues: 127 00:25:44.387 Number of I/O Completion Queues: 127 00:25:44.387 00:25:44.387 Active Namespaces 00:25:44.387 ================= 00:25:44.387 Namespace ID:1 00:25:44.387 Error Recovery Timeout: Unlimited 00:25:44.387 Command Set Identifier: NVM (00h) 00:25:44.387 Deallocate: Supported 00:25:44.387 Deallocated/Unwritten Error: Not Supported 00:25:44.387 Deallocated Read Value: Unknown 00:25:44.387 Deallocate in Write Zeroes: Not Supported 00:25:44.387 Deallocated Guard Field: 0xFFFF 00:25:44.387 Flush: Supported 00:25:44.387 Reservation: Supported 00:25:44.387 Namespace Sharing Capabilities: Multiple Controllers 00:25:44.387 Size (in LBAs): 131072 (0GiB) 00:25:44.387 Capacity (in LBAs): 131072 (0GiB) 00:25:44.387 Utilization (in LBAs): 131072 (0GiB) 00:25:44.387 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:44.387 EUI64: ABCDEF0123456789 00:25:44.387 UUID: bb4d8b45-6867-40f1-bc77-be90389da99d 00:25:44.387 Thin Provisioning: Not Supported 00:25:44.387 Per-NS Atomic Units: Yes 00:25:44.387 Atomic Boundary Size (Normal): 0 00:25:44.387 Atomic Boundary Size (PFail): 0 00:25:44.387 Atomic Boundary Offset: 0 00:25:44.387 Maximum Single 
Source Range Length: 65535 00:25:44.387 Maximum Copy Length: 65535 00:25:44.387 Maximum Source Range Count: 1 00:25:44.387 NGUID/EUI64 Never Reused: No 00:25:44.387 Namespace Write Protected: No 00:25:44.387 Number of LBA Formats: 1 00:25:44.387 Current LBA Format: LBA Format #00 00:25:44.387 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:44.387 00:25:44.387 17:18:00 -- host/identify.sh@51 -- # sync 00:25:44.387 17:18:00 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.387 17:18:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.387 17:18:00 -- common/autotest_common.sh@10 -- # set +x 00:25:44.387 17:18:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.387 17:18:00 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:44.387 17:18:00 -- host/identify.sh@56 -- # nvmftestfini 00:25:44.387 17:18:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:44.387 17:18:00 -- nvmf/common.sh@116 -- # sync 00:25:44.387 17:18:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:44.387 17:18:00 -- nvmf/common.sh@119 -- # set +e 00:25:44.387 17:18:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:44.387 17:18:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:44.387 rmmod nvme_tcp 00:25:44.387 rmmod nvme_fabrics 00:25:44.387 rmmod nvme_keyring 00:25:44.387 17:18:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:44.387 17:18:00 -- nvmf/common.sh@123 -- # set -e 00:25:44.387 17:18:00 -- nvmf/common.sh@124 -- # return 0 00:25:44.387 17:18:00 -- nvmf/common.sh@477 -- # '[' -n 623378 ']' 00:25:44.387 17:18:00 -- nvmf/common.sh@478 -- # killprocess 623378 00:25:44.387 17:18:00 -- common/autotest_common.sh@926 -- # '[' -z 623378 ']' 00:25:44.387 17:18:00 -- common/autotest_common.sh@930 -- # kill -0 623378 00:25:44.387 17:18:00 -- common/autotest_common.sh@931 -- # uname 00:25:44.387 17:18:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:44.387 17:18:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 623378 00:25:44.387 17:18:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:44.387 17:18:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:44.387 17:18:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 623378' 00:25:44.387 killing process with pid 623378 00:25:44.387 17:18:00 -- common/autotest_common.sh@945 -- # kill 623378 00:25:44.387 [2024-07-20 17:18:00.387271] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:44.387 17:18:00 -- common/autotest_common.sh@950 -- # wait 623378 00:25:44.644 17:18:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:44.644 17:18:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:44.644 17:18:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:44.644 17:18:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:44.644 17:18:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:44.644 17:18:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.644 17:18:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.644 17:18:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.538 17:18:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:46.538 00:25:46.538 real 0m6.113s 00:25:46.538 user 0m7.424s 00:25:46.538 sys 0m1.875s 00:25:46.538 17:18:02 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.538 17:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:46.538 ************************************ 00:25:46.538 END TEST nvmf_identify 00:25:46.538 ************************************ 00:25:46.795 17:18:02 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:46.795 17:18:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:46.795 17:18:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:46.795 17:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:46.795 ************************************ 00:25:46.795 START TEST nvmf_perf 00:25:46.795 ************************************ 00:25:46.795 17:18:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:46.795 * Looking for test storage... 00:25:46.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.795 17:18:02 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.795 17:18:02 -- nvmf/common.sh@7 -- # uname -s 00:25:46.795 17:18:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.795 17:18:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.795 17:18:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.795 17:18:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.795 17:18:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.795 17:18:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.795 17:18:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.795 17:18:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.795 17:18:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.795 17:18:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.795 17:18:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:46.795 17:18:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:46.795 17:18:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.795 17:18:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.795 17:18:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.795 17:18:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.795 17:18:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.795 17:18:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.795 17:18:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.796 17:18:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.796 17:18:02 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.796 17:18:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.796 17:18:02 -- paths/export.sh@5 -- # export PATH 00:25:46.796 17:18:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.796 17:18:02 -- nvmf/common.sh@46 -- # : 0 00:25:46.796 17:18:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:46.796 17:18:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:46.796 17:18:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:46.796 17:18:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.796 17:18:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.796 17:18:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:46.796 17:18:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:46.796 17:18:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:46.796 17:18:02 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:46.796 17:18:02 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:46.796 17:18:02 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:46.796 17:18:02 -- host/perf.sh@17 -- # nvmftestinit 00:25:46.796 17:18:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:46.796 17:18:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.796 17:18:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:46.796 17:18:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:46.796 17:18:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:46.796 17:18:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.796 17:18:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.796 17:18:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.796 17:18:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:46.796 17:18:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:46.796 17:18:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:46.796 17:18:02 -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.695 17:18:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:48.695 17:18:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:48.695 17:18:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:48.695 17:18:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:48.695 17:18:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:48.695 17:18:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:48.695 17:18:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:48.695 17:18:04 -- nvmf/common.sh@294 -- # net_devs=() 00:25:48.695 17:18:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:48.695 17:18:04 -- nvmf/common.sh@295 -- # e810=() 00:25:48.695 17:18:04 -- nvmf/common.sh@295 -- # local -ga e810 00:25:48.695 17:18:04 -- nvmf/common.sh@296 -- # x722=() 00:25:48.695 17:18:04 -- nvmf/common.sh@296 -- # local -ga x722 00:25:48.695 17:18:04 -- nvmf/common.sh@297 -- # mlx=() 00:25:48.695 17:18:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:48.695 17:18:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.695 17:18:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:48.695 17:18:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:48.695 17:18:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:48.695 17:18:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.695 17:18:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:48.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:48.695 17:18:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.695 17:18:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:48.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:48.695 17:18:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
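The device scan traced above (and continuing below) classifies NICs purely by PCI vendor:device ID: 0x8086:0x159b is an Intel E810 (Columbiaville) port, hence the cvl_* netdev names on this rig. A minimal stand-alone sketch of the same bucketing, assuming only that lspci is installed (this is not the suite's own helper):

    intel=0x8086
    declare -a e810 x722
    while read -r slot vendor device; do
        case "$vendor:$device" in
            "$intel:0x1592" | "$intel:0x159b") e810+=("$slot") ;;  # E810 (ice)
            "$intel:0x37d2")                   x722+=("$slot") ;;  # X722 (i40e)
        esac
    done < <(lspci -Dmmn | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')
    echo "E810 ports: ${e810[*]:-none}"

With the two 0x159b functions found above, e810 would hold 0000:0a:00.0 and 0000:0a:00.1.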
00:25:48.695 17:18:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:48.695 17:18:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.695 17:18:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.695 17:18:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:48.695 17:18:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.695 17:18:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:48.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:48.695 17:18:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.695 17:18:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.695 17:18:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.695 17:18:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:48.695 17:18:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.695 17:18:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:48.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:48.695 17:18:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.695 17:18:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:48.695 17:18:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:48.695 17:18:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:48.695 17:18:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.695 17:18:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.695 17:18:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.695 17:18:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:48.695 17:18:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.695 17:18:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.695 17:18:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:48.695 17:18:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.695 17:18:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.695 17:18:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:48.695 17:18:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:48.695 17:18:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.695 17:18:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.695 17:18:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.695 17:18:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.695 17:18:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:48.695 17:18:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.695 17:18:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.695 17:18:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.695 17:18:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:48.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:48.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:25:48.695 00:25:48.695 --- 10.0.0.2 ping statistics --- 00:25:48.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.695 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:48.695 17:18:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:48.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:48.695 00:25:48.695 --- 10.0.0.1 ping statistics --- 00:25:48.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.695 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:48.695 17:18:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.695 17:18:04 -- nvmf/common.sh@410 -- # return 0 00:25:48.695 17:18:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:48.695 17:18:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.695 17:18:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:48.695 17:18:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.695 17:18:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:48.695 17:18:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:48.695 17:18:04 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:48.695 17:18:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:48.695 17:18:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:48.695 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:25:48.695 17:18:04 -- nvmf/common.sh@469 -- # nvmfpid=625599 00:25:48.695 17:18:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:48.695 17:18:04 -- nvmf/common.sh@470 -- # waitforlisten 625599 00:25:48.695 17:18:04 -- common/autotest_common.sh@819 -- # '[' -z 625599 ']' 00:25:48.695 17:18:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.695 17:18:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:48.695 17:18:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.695 17:18:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:48.695 17:18:04 -- common/autotest_common.sh@10 -- # set +x 00:25:48.696 [2024-07-20 17:18:04.816643] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:48.696 [2024-07-20 17:18:04.816716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.696 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.953 [2024-07-20 17:18:04.883737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:48.953 [2024-07-20 17:18:04.972330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:48.953 [2024-07-20 17:18:04.972502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.953 [2024-07-20 17:18:04.972529] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
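Everything from here on runs split across network namespaces so one host can act as both initiator and target: cvl_0_0 (10.0.0.2) was moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (10.0.0.1) stays in the default one, which is exactly what the ping pair above verifies. A recap of the nvmf_tcp_init commands traced above (nothing new, just collected):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    # the target itself is then launched inside the namespace:
    # ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF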
00:25:48.953 [2024-07-20 17:18:04.972543] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.953 [2024-07-20 17:18:04.972616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.953 [2024-07-20 17:18:04.972673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.953 [2024-07-20 17:18:04.972721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:48.953 [2024-07-20 17:18:04.972724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.882 17:18:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:49.882 17:18:05 -- common/autotest_common.sh@852 -- # return 0 00:25:49.882 17:18:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:49.882 17:18:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:49.882 17:18:05 -- common/autotest_common.sh@10 -- # set +x 00:25:49.882 17:18:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.882 17:18:05 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:49.882 17:18:05 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:53.170 17:18:08 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:53.170 17:18:08 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:53.170 17:18:09 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:53.170 17:18:09 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:53.427 17:18:09 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:53.427 17:18:09 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:53.427 17:18:09 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:53.427 17:18:09 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:53.427 17:18:09 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:53.684 [2024-07-20 17:18:09.607951] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.684 17:18:09 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:53.941 17:18:09 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:53.941 17:18:09 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:54.198 17:18:10 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:54.198 17:18:10 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:54.198 17:18:10 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.455 [2024-07-20 17:18:10.567599] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.455 17:18:10 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:54.713 17:18:10 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:54.713 17:18:10 -- host/perf.sh@53 -- # perf_app -i 0 -q 
32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:54.713 17:18:10 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:54.713 17:18:10 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:56.084 Initializing NVMe Controllers 00:25:56.084 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:56.084 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:56.084 Initialization complete. Launching workers. 00:25:56.084 ======================================================== 00:25:56.084 Latency(us) 00:25:56.084 Device Information : IOPS MiB/s Average min max 00:25:56.084 PCIE (0000:88:00.0) NSID 1 from core 0: 87197.74 340.62 366.52 33.66 7335.08 00:25:56.084 ======================================================== 00:25:56.084 Total : 87197.74 340.62 366.52 33.66 7335.08 00:25:56.084 00:25:56.084 17:18:12 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:56.084 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.454 Initializing NVMe Controllers 00:25:57.454 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:57.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:57.454 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:57.454 Initialization complete. Launching workers. 00:25:57.454 ======================================================== 00:25:57.454 Latency(us) 00:25:57.454 Device Information : IOPS MiB/s Average min max 00:25:57.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 99.75 0.39 10216.53 303.93 45015.23 00:25:57.454 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.86 0.22 18649.60 5973.28 47912.37 00:25:57.454 ======================================================== 00:25:57.454 Total : 155.61 0.61 13243.79 303.93 47912.37 00:25:57.454 00:25:57.454 17:18:13 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:57.454 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.385 Initializing NVMe Controllers 00:25:58.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:58.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:58.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:58.385 Initialization complete. Launching workers. 
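Every spdk_nvme_perf invocation in this test varies the same few knobs: -q is the queue depth, -o the IO size in bytes, -w the workload pattern, -M the read percentage of the random mix, -t the run time in seconds, and -r the transport ID of the target (the extra -HI on the run above we read as the TCP header/data digest switches). A representative stand-alone run against the target configured above, with only the run time lengthened to a hypothetical 60 s:

    ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 60 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The 1-second runs used throughout this log are functional checks rather than steady-state benchmarks.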
00:25:58.385 ======================================================== 00:25:58.385 Latency(us) 00:25:58.385 Device Information : IOPS MiB/s Average min max 00:25:58.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7277.99 28.43 4409.83 780.94 8331.30 00:25:58.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3933.99 15.37 8170.18 5713.11 15519.66 00:25:58.385 ======================================================== 00:25:58.385 Total : 11211.98 43.80 5729.24 780.94 15519.66 00:25:58.385 00:25:58.385 17:18:14 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:58.385 17:18:14 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:58.385 17:18:14 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:58.385 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.913 Initializing NVMe Controllers 00:26:00.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.913 Controller IO queue size 128, less than required. 00:26:00.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:00.913 Controller IO queue size 128, less than required. 00:26:00.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:00.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:00.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:00.913 Initialization complete. Launching workers. 00:26:00.913 ======================================================== 00:26:00.913 Latency(us) 00:26:00.913 Device Information : IOPS MiB/s Average min max 00:26:00.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 654.88 163.72 202464.14 99085.67 303373.23 00:26:00.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 552.05 138.01 240270.91 126903.99 365568.57 00:26:00.913 ======================================================== 00:26:00.913 Total : 1206.93 301.73 219757.06 99085.67 365568.57 00:26:00.913 00:26:00.913 17:18:17 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:00.913 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.170 No valid NVMe controllers or AIO or URING devices found 00:26:01.170 Initializing NVMe Controllers 00:26:01.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:01.170 Controller IO queue size 128, less than required. 00:26:01.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:01.170 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:01.170 Controller IO queue size 128, less than required. 00:26:01.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:01.170 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:01.170 WARNING: Some requested NVMe devices were skipped 00:26:01.426 17:18:17 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:01.426 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.961 Initializing NVMe Controllers 00:26:03.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:03.961 Controller IO queue size 128, less than required. 00:26:03.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.961 Controller IO queue size 128, less than required. 00:26:03.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:03.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:03.961 Initialization complete. Launching workers. 00:26:03.961 00:26:03.961 ==================== 00:26:03.961 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:03.961 TCP transport: 00:26:03.961 polls: 36633 00:26:03.961 idle_polls: 12336 00:26:03.961 sock_completions: 24297 00:26:03.961 nvme_completions: 2503 00:26:03.961 submitted_requests: 3909 00:26:03.961 queued_requests: 1 00:26:03.961 00:26:03.961 ==================== 00:26:03.961 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:03.961 TCP transport: 00:26:03.961 polls: 36878 00:26:03.961 idle_polls: 13317 00:26:03.961 sock_completions: 23561 00:26:03.961 nvme_completions: 2197 00:26:03.961 submitted_requests: 3443 00:26:03.961 queued_requests: 1 00:26:03.961 ======================================================== 00:26:03.961 Latency(us) 00:26:03.961 Device Information : IOPS MiB/s Average min max 00:26:03.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 687.71 171.93 197199.11 123340.59 310879.80 00:26:03.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 610.91 152.73 216376.14 111543.29 331253.00 00:26:03.961 ======================================================== 00:26:03.961 Total : 1298.62 324.65 206220.56 111543.29 331253.00 00:26:03.961 00:26:03.961 17:18:19 -- host/perf.sh@66 -- # sync 00:26:03.961 17:18:19 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:04.289 17:18:20 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:04.289 17:18:20 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:26:04.289 17:18:20 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:07.568 17:18:23 -- host/perf.sh@72 -- # ls_guid=bbc6d59d-a659-4c6a-9252-06674096a480 00:26:07.568 17:18:23 -- host/perf.sh@73 -- # get_lvs_free_mb bbc6d59d-a659-4c6a-9252-06674096a480 00:26:07.568 17:18:23 -- common/autotest_common.sh@1343 -- # local lvs_uuid=bbc6d59d-a659-4c6a-9252-06674096a480 00:26:07.568 17:18:23 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:07.568 17:18:23 -- common/autotest_common.sh@1345 -- # local fc 00:26:07.568 17:18:23 -- common/autotest_common.sh@1346 -- # local cs 00:26:07.568 17:18:23 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:07.568 17:18:23 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:07.568 { 00:26:07.568 "uuid": "bbc6d59d-a659-4c6a-9252-06674096a480", 00:26:07.568 "name": "lvs_0", 00:26:07.568 "base_bdev": "Nvme0n1", 00:26:07.568 "total_data_clusters": 238234, 00:26:07.568 "free_clusters": 238234, 00:26:07.568 "block_size": 512, 00:26:07.568 "cluster_size": 4194304 00:26:07.568 } 00:26:07.568 ]' 00:26:07.568 17:18:23 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="bbc6d59d-a659-4c6a-9252-06674096a480") .free_clusters' 00:26:07.568 17:18:23 -- common/autotest_common.sh@1348 -- # fc=238234 00:26:07.568 17:18:23 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="bbc6d59d-a659-4c6a-9252-06674096a480") .cluster_size' 00:26:07.568 17:18:23 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:07.568 17:18:23 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:26:07.568 17:18:23 -- common/autotest_common.sh@1353 -- # echo 952936 00:26:07.568 952936 00:26:07.568 17:18:23 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:26:07.568 17:18:23 -- host/perf.sh@78 -- # free_mb=20480 00:26:07.568 17:18:23 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bbc6d59d-a659-4c6a-9252-06674096a480 lbd_0 20480 00:26:08.132 17:18:24 -- host/perf.sh@80 -- # lb_guid=05e66b77-5b09-48ba-b025-8eb07f647ec7 00:26:08.132 17:18:24 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 05e66b77-5b09-48ba-b025-8eb07f647ec7 lvs_n_0 00:26:09.062 17:18:24 -- host/perf.sh@83 -- # ls_nested_guid=cb34453b-bdb8-4d52-a9e5-3560ad8d2839 00:26:09.062 17:18:24 -- host/perf.sh@84 -- # get_lvs_free_mb cb34453b-bdb8-4d52-a9e5-3560ad8d2839 00:26:09.062 17:18:24 -- common/autotest_common.sh@1343 -- # local lvs_uuid=cb34453b-bdb8-4d52-a9e5-3560ad8d2839 00:26:09.062 17:18:24 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:09.062 17:18:24 -- common/autotest_common.sh@1345 -- # local fc 00:26:09.062 17:18:24 -- common/autotest_common.sh@1346 -- # local cs 00:26:09.062 17:18:24 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:09.062 17:18:25 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:09.062 { 00:26:09.062 "uuid": "bbc6d59d-a659-4c6a-9252-06674096a480", 00:26:09.062 "name": "lvs_0", 00:26:09.062 "base_bdev": "Nvme0n1", 00:26:09.062 "total_data_clusters": 238234, 00:26:09.062 "free_clusters": 233114, 00:26:09.062 "block_size": 512, 00:26:09.062 "cluster_size": 4194304 00:26:09.062 }, 00:26:09.062 { 00:26:09.062 "uuid": "cb34453b-bdb8-4d52-a9e5-3560ad8d2839", 00:26:09.062 "name": "lvs_n_0", 00:26:09.062 "base_bdev": "05e66b77-5b09-48ba-b025-8eb07f647ec7", 00:26:09.062 "total_data_clusters": 5114, 00:26:09.062 "free_clusters": 5114, 00:26:09.062 "block_size": 512, 00:26:09.062 "cluster_size": 4194304 00:26:09.062 } 00:26:09.062 ]' 00:26:09.318 17:18:25 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="cb34453b-bdb8-4d52-a9e5-3560ad8d2839") .free_clusters' 00:26:09.318 17:18:25 -- common/autotest_common.sh@1348 -- # fc=5114 00:26:09.318 17:18:25 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="cb34453b-bdb8-4d52-a9e5-3560ad8d2839") .cluster_size' 00:26:09.318 17:18:25 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:09.318 17:18:25 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:26:09.318 17:18:25 -- common/autotest_common.sh@1353 -- # echo 20456 00:26:09.318 20456 00:26:09.318 17:18:25 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:09.319 17:18:25 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cb34453b-bdb8-4d52-a9e5-3560ad8d2839 lbd_nest_0 20456 00:26:09.574 17:18:25 -- host/perf.sh@88 -- # lb_nested_guid=535a7834-1e52-47fa-8bcd-834361a714cb 00:26:09.574 17:18:25 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:09.831 17:18:25 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:09.831 17:18:25 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 535a7834-1e52-47fa-8bcd-834361a714cb 00:26:10.088 17:18:26 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.345 17:18:26 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:10.345 17:18:26 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:10.345 17:18:26 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:10.345 17:18:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:10.345 17:18:26 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:10.345 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.531 Initializing NVMe Controllers 00:26:22.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:22.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:22.531 Initialization complete. Launching workers. 00:26:22.531 ======================================================== 00:26:22.531 Latency(us) 00:26:22.531 Device Information : IOPS MiB/s Average min max 00:26:22.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.39 0.02 21116.56 289.92 44998.30 00:26:22.531 ======================================================== 00:26:22.531 Total : 47.39 0.02 21116.56 289.92 44998.30 00:26:22.531 00:26:22.531 17:18:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:22.531 17:18:36 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:22.531 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.493 Initializing NVMe Controllers 00:26:32.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:32.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:32.493 Initialization complete. Launching workers. 
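The free_mb values computed above fall straight out of bdev_lvol_get_lvstores: with a 4194304-byte (4 MiB) cluster size, free_mb = free_clusters * 4, so lvs_0's 238234 free clusters give 952936 MB (capped to 20480 MB for lbd_0) and lvs_n_0's 5114 clusters give 20456 MB, which is why lbd_nest_0 is created at 20456 rather than 20480 (the '[' 20456 -gt 20480 ']' check above is false). The same arithmetic as a one-liner (a sketch, assuming scripts/rpc.py is on PATH):

    rpc.py bdev_lvol_get_lvstores | jq -r \
        '.[] | "\(.name): \(.free_clusters * .cluster_size / 1048576) MB free"'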
00:26:32.493 ======================================================== 00:26:32.493 Latency(us) 00:26:32.493 Device Information : IOPS MiB/s Average min max 00:26:32.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.49 10.31 12141.84 5037.06 47886.23 00:26:32.493 ======================================================== 00:26:32.493 Total : 82.49 10.31 12141.84 5037.06 47886.23 00:26:32.493 00:26:32.493 17:18:47 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:32.493 17:18:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:32.493 17:18:47 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:32.493 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.501 Initializing NVMe Controllers 00:26:42.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:42.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:42.501 Initialization complete. Launching workers. 00:26:42.501 ======================================================== 00:26:42.501 Latency(us) 00:26:42.501 Device Information : IOPS MiB/s Average min max 00:26:42.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6622.02 3.23 4842.63 371.99 47843.42 00:26:42.501 ======================================================== 00:26:42.501 Total : 6622.02 3.23 4842.63 371.99 47843.42 00:26:42.501 00:26:42.501 17:18:57 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:42.502 17:18:57 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:42.502 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.463 Initializing NVMe Controllers 00:26:52.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:52.463 Initialization complete. Launching workers. 00:26:52.463 ======================================================== 00:26:52.463 Latency(us) 00:26:52.463 Device Information : IOPS MiB/s Average min max 00:26:52.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1248.92 156.11 25641.42 2288.68 53809.44 00:26:52.463 ======================================================== 00:26:52.463 Total : 1248.92 156.11 25641.42 2288.68 53809.44 00:26:52.463 00:26:52.463 17:19:07 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:52.463 17:19:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:52.463 17:19:07 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:52.463 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.421 Initializing NVMe Controllers 00:27:02.421 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.421 Controller IO queue size 128, less than required. 00:27:02.421 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:02.421 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.421 Initialization complete. Launching workers. 
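In the latency tables above, MiB/s is simply IOPS x IO size (82.49 x 131072 B / 2^20 = 10.31 MiB/s; 6622.02 x 512 B / 2^20 = 3.23 MiB/s), and at a fixed queue depth the average latency bounds IOPS as qd x 10^6 / avg_latency_us: for the -q 32 -o 512 row, 32 x 10^6 / 4842.63 ~= 6608, matching the measured 6622. The small-block runs are therefore latency/IOPS-bound, while only the 128 KiB runs exercise real bandwidth.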
00:27:02.421 ======================================================== 00:27:02.421 Latency(us) 00:27:02.421 Device Information : IOPS MiB/s Average min max 00:27:02.421 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11995.20 5.86 10673.55 1723.13 26190.83 00:27:02.421 ======================================================== 00:27:02.421 Total : 11995.20 5.86 10673.55 1723.13 26190.83 00:27:02.421 00:27:02.421 17:19:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:02.421 17:19:18 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:02.421 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.621 Initializing NVMe Controllers 00:27:14.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:14.621 Controller IO queue size 128, less than required. 00:27:14.621 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:14.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:14.621 Initialization complete. Launching workers. 00:27:14.621 ======================================================== 00:27:14.621 Latency(us) 00:27:14.621 Device Information : IOPS MiB/s Average min max 00:27:14.621 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1014.90 126.86 126464.95 21963.00 279365.85 00:27:14.621 ======================================================== 00:27:14.621 Total : 1014.90 126.86 126464.95 21963.00 279365.85 00:27:14.621 00:27:14.621 17:19:28 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.621 17:19:29 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 535a7834-1e52-47fa-8bcd-834361a714cb 00:27:14.621 17:19:29 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:14.621 17:19:30 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 05e66b77-5b09-48ba-b025-8eb07f647ec7 00:27:14.621 17:19:30 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:14.621 17:19:30 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:14.621 17:19:30 -- host/perf.sh@114 -- # nvmftestfini 00:27:14.621 17:19:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:14.621 17:19:30 -- nvmf/common.sh@116 -- # sync 00:27:14.621 17:19:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:14.621 17:19:30 -- nvmf/common.sh@119 -- # set +e 00:27:14.621 17:19:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:14.621 17:19:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:14.880 rmmod nvme_tcp 00:27:14.881 rmmod nvme_fabrics 00:27:14.881 rmmod nvme_keyring 00:27:14.881 17:19:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:14.881 17:19:30 -- nvmf/common.sh@123 -- # set -e 00:27:14.881 17:19:30 -- nvmf/common.sh@124 -- # return 0 00:27:14.881 17:19:30 -- nvmf/common.sh@477 -- # '[' -n 625599 ']' 00:27:14.881 17:19:30 -- nvmf/common.sh@478 -- # killprocess 625599 00:27:14.881 17:19:30 -- common/autotest_common.sh@926 -- # '[' -z 625599 ']' 00:27:14.881 17:19:30 -- common/autotest_common.sh@930 -- # kill 
-0 625599 00:27:14.881 17:19:30 -- common/autotest_common.sh@931 -- # uname 00:27:14.881 17:19:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:14.881 17:19:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 625599 00:27:14.881 17:19:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:14.881 17:19:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:14.881 17:19:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 625599' 00:27:14.881 killing process with pid 625599 00:27:14.881 17:19:30 -- common/autotest_common.sh@945 -- # kill 625599 00:27:14.881 17:19:30 -- common/autotest_common.sh@950 -- # wait 625599 00:27:16.274 17:19:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:16.274 17:19:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:16.274 17:19:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:16.274 17:19:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.274 17:19:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:16.274 17:19:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.274 17:19:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.274 17:19:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.809 17:19:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:18.809 00:27:18.809 real 1m31.760s 00:27:18.809 user 5m39.704s 00:27:18.809 sys 0m14.629s 00:27:18.809 17:19:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.809 17:19:34 -- common/autotest_common.sh@10 -- # set +x 00:27:18.809 ************************************ 00:27:18.809 END TEST nvmf_perf 00:27:18.809 ************************************ 00:27:18.809 17:19:34 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:18.809 17:19:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:18.809 17:19:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:18.809 17:19:34 -- common/autotest_common.sh@10 -- # set +x 00:27:18.809 ************************************ 00:27:18.809 START TEST nvmf_fio_host 00:27:18.809 ************************************ 00:27:18.809 17:19:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:18.809 * Looking for test storage... 
00:27:18.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:18.809 17:19:34 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.809 17:19:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.809 17:19:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.810 17:19:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.810 17:19:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.810 17:19:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.810 17:19:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.810 17:19:34 -- paths/export.sh@5 -- # export PATH 00:27:18.810 17:19:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.810 17:19:34 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:18.810 17:19:34 -- nvmf/common.sh@7 -- # uname -s 00:27:18.810 17:19:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.810 17:19:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.810 17:19:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.810 17:19:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.810 17:19:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.810 17:19:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.810 17:19:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.810 17:19:34 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.810 17:19:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.810 17:19:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.810 17:19:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.810 17:19:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:18.810 17:19:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.810 17:19:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.810 17:19:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.810 17:19:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:18.810 17:19:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.810 17:19:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.810 17:19:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.810 17:19:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.810 17:19:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.810 17:19:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.810 17:19:34 -- paths/export.sh@5 -- # export PATH 00:27:18.810 17:19:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.810 17:19:34 -- nvmf/common.sh@46 -- # : 0 00:27:18.810 17:19:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:18.810 17:19:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:18.810 17:19:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:18.810 17:19:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.810 17:19:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.810 17:19:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:18.810 17:19:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:18.810 17:19:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:18.810 17:19:34 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:18.810 17:19:34 -- host/fio.sh@14 -- # nvmftestinit 00:27:18.810 17:19:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:18.810 17:19:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.810 17:19:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:18.810 17:19:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:18.810 17:19:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:18.810 17:19:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.810 17:19:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.810 17:19:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.810 17:19:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:18.810 17:19:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:18.810 17:19:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:18.810 17:19:34 -- common/autotest_common.sh@10 -- # set +x 00:27:20.713 17:19:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:20.713 17:19:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:20.713 17:19:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:20.713 17:19:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:20.713 17:19:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:20.713 17:19:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:20.713 17:19:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:20.713 17:19:36 -- nvmf/common.sh@294 -- # net_devs=() 00:27:20.713 17:19:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:20.713 17:19:36 -- nvmf/common.sh@295 -- # e810=() 00:27:20.713 17:19:36 -- nvmf/common.sh@295 -- # local -ga e810 00:27:20.713 17:19:36 -- nvmf/common.sh@296 -- # x722=() 00:27:20.713 17:19:36 -- nvmf/common.sh@296 -- # local -ga x722 00:27:20.713 17:19:36 -- nvmf/common.sh@297 -- # mlx=() 00:27:20.713 17:19:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:20.713 17:19:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.713 17:19:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.713 17:19:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.713 17:19:36 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.713 17:19:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.713 17:19:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.713 17:19:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.713 17:19:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.714 17:19:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.714 17:19:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.714 17:19:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.714 17:19:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:20.714 17:19:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:20.714 17:19:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:20.714 17:19:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:20.714 17:19:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:20.714 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:20.714 17:19:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:20.714 17:19:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:20.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:20.714 17:19:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:20.714 17:19:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:20.714 17:19:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.714 17:19:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:20.714 17:19:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.714 17:19:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:20.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:20.714 17:19:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.714 17:19:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:20.714 17:19:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.714 17:19:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:20.714 17:19:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.714 17:19:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:20.714 Found net devices under 0000:0a:00.1: cvl_0_1 
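[editor's note] The discovery loop above matches a fixed table of Intel/Mellanox PCI device IDs and then resolves each matching function to its kernel netdev through sysfs. A minimal sketch of that resolution step, using the same sysfs glob the script traces and the PCI addresses seen in this log:

    # Resolve kernel net device names for the two NIC ports found above.
    # Addresses 0000:0a:00.0 / 0000:0a:00.1 are taken from this log.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${path##*/}"
        done
    done

On this rig the loop prints cvl_0_0 and cvl_0_1, matching the "Found net devices under ..." lines above.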
00:27:20.714 17:19:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.714 17:19:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:20.714 17:19:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:20.714 17:19:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:20.714 17:19:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.714 17:19:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.714 17:19:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.714 17:19:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:20.714 17:19:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.714 17:19:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.714 17:19:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:20.714 17:19:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.714 17:19:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.714 17:19:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:20.714 17:19:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:20.714 17:19:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.714 17:19:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.714 17:19:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.714 17:19:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.714 17:19:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:20.714 17:19:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.714 17:19:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.714 17:19:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.714 17:19:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:20.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:27:20.714 00:27:20.714 --- 10.0.0.2 ping statistics --- 00:27:20.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.714 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:27:20.714 17:19:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:27:20.714 00:27:20.714 --- 10.0.0.1 ping statistics --- 00:27:20.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.714 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:20.714 17:19:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.714 17:19:36 -- nvmf/common.sh@410 -- # return 0 00:27:20.714 17:19:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:20.714 17:19:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.714 17:19:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:20.714 17:19:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.714 17:19:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:20.714 17:19:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:20.714 17:19:36 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:20.714 17:19:36 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:20.714 17:19:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:20.714 17:19:36 -- common/autotest_common.sh@10 -- # set +x 00:27:20.714 17:19:36 -- host/fio.sh@24 -- # nvmfpid=638536 00:27:20.714 17:19:36 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:20.714 17:19:36 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.714 17:19:36 -- host/fio.sh@28 -- # waitforlisten 638536 00:27:20.714 17:19:36 -- common/autotest_common.sh@819 -- # '[' -z 638536 ']' 00:27:20.714 17:19:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.714 17:19:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:20.714 17:19:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.714 17:19:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:20.714 17:19:36 -- common/autotest_common.sh@10 -- # set +x 00:27:20.714 [2024-07-20 17:19:36.648757] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:20.714 [2024-07-20 17:19:36.648872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.714 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.714 [2024-07-20 17:19:36.718326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:20.714 [2024-07-20 17:19:36.808071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:20.714 [2024-07-20 17:19:36.808229] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.714 [2024-07-20 17:19:36.808249] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.714 [2024-07-20 17:19:36.808264] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
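[editor's note] With two ports present, nvmf_tcp_init picks cvl_0_0 as the target interface and cvl_0_1 as the initiator, then isolates the target side in a network namespace so both ends can run on one host. A condensed sketch of the sequence traced above (interface names, addresses, and port are copied from this log; adjust for other rigs):

    set -e
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"          # target port moves into the namespace

    ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target IP

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Let NVMe/TCP traffic for port 4420 through the host firewall
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions, as the log does
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1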
00:27:20.714 [2024-07-20 17:19:36.808323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.714 [2024-07-20 17:19:36.808380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:20.714 [2024-07-20 17:19:36.808495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:20.714 [2024-07-20 17:19:36.808497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.648 17:19:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:21.648 17:19:37 -- common/autotest_common.sh@852 -- # return 0 00:27:21.648 17:19:37 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:21.930 [2024-07-20 17:19:37.842177] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.930 17:19:37 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:21.930 17:19:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:21.930 17:19:37 -- common/autotest_common.sh@10 -- # set +x 00:27:21.930 17:19:37 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:22.187 Malloc1 00:27:22.187 17:19:38 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.447 17:19:38 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:22.705 17:19:38 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.705 [2024-07-20 17:19:38.826309] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.705 17:19:38 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:22.963 17:19:39 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:22.963 17:19:39 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:22.963 17:19:39 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:22.963 17:19:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:22.963 17:19:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:22.963 17:19:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:22.963 17:19:39 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.963 17:19:39 -- common/autotest_common.sh@1320 -- # shift 00:27:22.963 17:19:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:22.963 17:19:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.963 17:19:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.963 17:19:39 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:27:22.963 17:19:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:22.963 17:19:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:22.963 17:19:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:22.963 17:19:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.963 17:19:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.963 17:19:39 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:22.963 17:19:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:22.963 17:19:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:22.963 17:19:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:22.963 17:19:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:22.963 17:19:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:23.221 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:23.221 fio-3.35 00:27:23.221 Starting 1 thread 00:27:23.221 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.747 00:27:25.747 test: (groupid=0, jobs=1): err= 0: pid=639033: Sat Jul 20 17:19:41 2024 00:27:25.747 read: IOPS=9432, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2003msec) 00:27:25.747 slat (nsec): min=1965, max=112088, avg=2525.57, stdev=1670.93 00:27:25.747 clat (usec): min=2817, max=14011, avg=8094.29, stdev=1364.92 00:27:25.747 lat (usec): min=2819, max=14014, avg=8096.82, stdev=1364.88 00:27:25.747 clat percentiles (usec): 00:27:25.747 | 1.00th=[ 5538], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 6980], 00:27:25.747 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8160], 00:27:25.747 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10028], 95.00th=[10683], 00:27:25.747 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13042], 99.95th=[13173], 00:27:25.747 | 99.99th=[13960] 00:27:25.747 bw ( KiB/s): min=36488, max=39104, per=99.73%, avg=37628.00, stdev=1087.62, samples=4 00:27:25.747 iops : min= 9122, max= 9776, avg=9407.00, stdev=271.90, samples=4 00:27:25.747 write: IOPS=9433, BW=36.9MiB/s (38.6MB/s)(73.8MiB/2003msec); 0 zone resets 00:27:25.747 slat (nsec): min=2065, max=89710, avg=2688.30, stdev=1314.90 00:27:25.747 clat (usec): min=1508, max=8598, avg=5422.41, stdev=877.79 00:27:25.747 lat (usec): min=1514, max=8619, avg=5425.10, stdev=877.85 00:27:25.747 clat percentiles (usec): 00:27:25.747 | 1.00th=[ 3326], 5.00th=[ 3884], 10.00th=[ 4228], 20.00th=[ 4621], 00:27:25.747 | 30.00th=[ 5014], 40.00th=[ 5276], 50.00th=[ 5538], 60.00th=[ 5735], 00:27:25.747 | 70.00th=[ 5932], 80.00th=[ 6128], 90.00th=[ 6456], 95.00th=[ 6783], 00:27:25.747 | 99.00th=[ 7242], 99.50th=[ 7504], 99.90th=[ 7898], 99.95th=[ 8455], 00:27:25.747 | 99.99th=[ 8586] 00:27:25.747 bw ( KiB/s): min=37376, max=38120, per=99.90%, avg=37696.00, stdev=309.77, samples=4 00:27:25.747 iops : min= 9344, max= 9530, avg=9424.00, stdev=77.44, samples=4 00:27:25.747 lat (msec) : 2=0.01%, 4=3.25%, 10=91.68%, 20=5.06% 00:27:25.747 cpu : usr=66.68%, sys=27.57%, ctx=24, majf=0, minf=5 00:27:25.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:25.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.747 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:25.747 issued rwts: total=18893,18896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:25.747 00:27:25.747 Run status group 0 (all jobs): 00:27:25.747 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), run=2003-2003msec 00:27:25.747 WRITE: bw=36.9MiB/s (38.6MB/s), 36.9MiB/s-36.9MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), run=2003-2003msec 00:27:25.747 17:19:41 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:25.747 17:19:41 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:25.747 17:19:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:25.747 17:19:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:25.747 17:19:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:25.747 17:19:41 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.747 17:19:41 -- common/autotest_common.sh@1320 -- # shift 00:27:25.747 17:19:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:25.747 17:19:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.747 17:19:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.747 17:19:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:25.748 17:19:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:25.748 17:19:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:25.748 17:19:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:25.748 17:19:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.748 17:19:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.748 17:19:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:25.748 17:19:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:25.748 17:19:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:25.748 17:19:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:25.748 17:19:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:25.748 17:19:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:26.007 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:26.007 fio-3.35 00:27:26.007 Starting 1 thread 00:27:26.007 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.533 00:27:28.533 test: (groupid=0, jobs=1): err= 0: pid=639370: Sat Jul 20 17:19:44 2024 00:27:28.533 read: IOPS=5676, BW=88.7MiB/s (93.0MB/s)(178MiB/2007msec) 00:27:28.533 slat (nsec): min=2716, max=92541, avg=3699.81, stdev=1930.00 00:27:28.533 clat (usec): 
min=3977, max=43564, avg=14093.02, stdev=3782.40 00:27:28.533 lat (usec): min=3980, max=43567, avg=14096.72, stdev=3782.42 00:27:28.533 clat percentiles (usec): 00:27:28.533 | 1.00th=[ 6980], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10814], 00:27:28.533 | 30.00th=[11731], 40.00th=[12780], 50.00th=[13960], 60.00th=[14877], 00:27:28.533 | 70.00th=[16057], 80.00th=[17171], 90.00th=[18744], 95.00th=[20055], 00:27:28.533 | 99.00th=[24511], 99.50th=[29230], 99.90th=[30278], 99.95th=[30278], 00:27:28.533 | 99.99th=[30278] 00:27:28.533 bw ( KiB/s): min=39008, max=58400, per=51.42%, avg=46696.00, stdev=8384.99, samples=4 00:27:28.533 iops : min= 2438, max= 3650, avg=2918.50, stdev=524.06, samples=4 00:27:28.533 write: IOPS=3268, BW=51.1MiB/s (53.6MB/s)(94.8MiB/1856msec); 0 zone resets 00:27:28.533 slat (usec): min=30, max=140, avg=33.57, stdev= 4.92 00:27:28.533 clat (usec): min=8286, max=33994, avg=15068.64, stdev=3492.47 00:27:28.533 lat (usec): min=8318, max=34029, avg=15102.22, stdev=3492.53 00:27:28.534 clat percentiles (usec): 00:27:28.534 | 1.00th=[ 8979], 5.00th=[10290], 10.00th=[10814], 20.00th=[11863], 00:27:28.534 | 30.00th=[12649], 40.00th=[13698], 50.00th=[14615], 60.00th=[15795], 00:27:28.534 | 70.00th=[16909], 80.00th=[17957], 90.00th=[19530], 95.00th=[20579], 00:27:28.534 | 99.00th=[25822], 99.50th=[28181], 99.90th=[29754], 99.95th=[30016], 00:27:28.534 | 99.99th=[33817] 00:27:28.534 bw ( KiB/s): min=40960, max=60704, per=92.80%, avg=48536.00, stdev=8589.26, samples=4 00:27:28.534 iops : min= 2560, max= 3794, avg=3033.50, stdev=536.83, samples=4 00:27:28.534 lat (msec) : 4=0.01%, 10=10.33%, 20=83.38%, 50=6.28% 00:27:28.534 cpu : usr=78.56%, sys=19.39%, ctx=19, majf=0, minf=1 00:27:28.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:28.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:28.534 issued rwts: total=11392,6067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:28.534 00:27:28.534 Run status group 0 (all jobs): 00:27:28.534 READ: bw=88.7MiB/s (93.0MB/s), 88.7MiB/s-88.7MiB/s (93.0MB/s-93.0MB/s), io=178MiB (187MB), run=2007-2007msec 00:27:28.534 WRITE: bw=51.1MiB/s (53.6MB/s), 51.1MiB/s-51.1MiB/s (53.6MB/s-53.6MB/s), io=94.8MiB (99.4MB), run=1856-1856msec 00:27:28.534 17:19:44 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.534 17:19:44 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:28.534 17:19:44 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:28.534 17:19:44 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:28.534 17:19:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:28.534 17:19:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:28.534 17:19:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:28.534 17:19:44 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:28.534 17:19:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:28.534 17:19:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:28.534 17:19:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:28.534 17:19:44 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:31.824 Nvme0n1 00:27:31.824 17:19:47 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:35.110 17:19:50 -- host/fio.sh@53 -- # ls_guid=118b138a-fa7b-4cdd-95a5-9d604f3855c7 00:27:35.110 17:19:50 -- host/fio.sh@54 -- # get_lvs_free_mb 118b138a-fa7b-4cdd-95a5-9d604f3855c7 00:27:35.110 17:19:50 -- common/autotest_common.sh@1343 -- # local lvs_uuid=118b138a-fa7b-4cdd-95a5-9d604f3855c7 00:27:35.110 17:19:50 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:35.110 17:19:50 -- common/autotest_common.sh@1345 -- # local fc 00:27:35.110 17:19:50 -- common/autotest_common.sh@1346 -- # local cs 00:27:35.110 17:19:50 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:35.110 17:19:50 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:35.110 { 00:27:35.110 "uuid": "118b138a-fa7b-4cdd-95a5-9d604f3855c7", 00:27:35.110 "name": "lvs_0", 00:27:35.110 "base_bdev": "Nvme0n1", 00:27:35.110 "total_data_clusters": 930, 00:27:35.110 "free_clusters": 930, 00:27:35.110 "block_size": 512, 00:27:35.110 "cluster_size": 1073741824 00:27:35.110 } 00:27:35.110 ]' 00:27:35.110 17:19:50 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="118b138a-fa7b-4cdd-95a5-9d604f3855c7") .free_clusters' 00:27:35.110 17:19:50 -- common/autotest_common.sh@1348 -- # fc=930 00:27:35.110 17:19:50 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="118b138a-fa7b-4cdd-95a5-9d604f3855c7") .cluster_size' 00:27:35.110 17:19:50 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:35.110 17:19:50 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:27:35.110 17:19:50 -- common/autotest_common.sh@1353 -- # echo 952320 00:27:35.110 952320 00:27:35.110 17:19:50 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:35.110 bbf7f41a-0058-476b-ad5a-b08ace007aa5 00:27:35.110 17:19:51 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:35.368 17:19:51 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:35.626 17:19:51 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:35.884 17:19:51 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:35.884 17:19:51 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:35.884 17:19:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:35.884 17:19:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:35.884 17:19:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:35.884 17:19:51 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:35.884 17:19:51 -- common/autotest_common.sh@1320 -- # shift 00:27:35.884 17:19:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:35.884 17:19:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:35.884 17:19:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:35.884 17:19:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:35.884 17:19:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:35.884 17:19:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:35.884 17:19:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:35.884 17:19:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:35.884 17:19:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:35.885 17:19:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:35.885 17:19:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:35.885 17:19:52 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:35.885 17:19:52 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:35.885 17:19:52 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:35.885 17:19:52 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:36.144 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:36.144 fio-3.35 00:27:36.144 Starting 1 thread 00:27:36.144 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.666 00:27:38.666 test: (groupid=0, jobs=1): err= 0: pid=640690: Sat Jul 20 17:19:54 2024 00:27:38.666 read: IOPS=5285, BW=20.6MiB/s (21.7MB/s)(41.4MiB/2005msec) 00:27:38.666 slat (nsec): min=1906, max=165307, avg=2560.96, stdev=2393.17 00:27:38.666 clat (msec): min=2, max=172, avg=14.13, stdev=12.51 00:27:38.666 lat (msec): min=2, max=172, avg=14.13, stdev=12.51 00:27:38.666 clat percentiles (msec): 00:27:38.666 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:27:38.666 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:27:38.666 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 18], 95.00th=[ 19], 00:27:38.666 | 99.00th=[ 22], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 174], 00:27:38.666 | 99.99th=[ 174] 00:27:38.666 bw ( KiB/s): min=15312, max=23808, per=99.43%, avg=21022.00, stdev=3957.73, samples=4 00:27:38.666 iops : min= 3828, max= 5952, avg=5255.50, stdev=989.43, samples=4 00:27:38.666 write: IOPS=5275, BW=20.6MiB/s (21.6MB/s)(41.3MiB/2005msec); 0 zone resets 00:27:38.666 slat (nsec): min=1997, max=125410, avg=2650.89, stdev=1884.87 00:27:38.666 clat (usec): min=957, max=170691, avg=9923.36, stdev=11708.54 00:27:38.666 lat (usec): min=960, max=170699, avg=9926.01, stdev=11708.88 00:27:38.666 clat percentiles (msec): 00:27:38.666 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:27:38.666 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 10], 00:27:38.666 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 12], 00:27:38.666 | 99.00th=[ 14], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 171], 00:27:38.666 | 99.99th=[ 171] 00:27:38.666 bw ( KiB/s): min=16320, max=23520, 
per=99.87%, avg=21074.00, stdev=3320.38, samples=4 00:27:38.666 iops : min= 4080, max= 5880, avg=5268.50, stdev=830.09, samples=4 00:27:38.666 lat (usec) : 1000=0.01% 00:27:38.666 lat (msec) : 2=0.01%, 4=0.10%, 10=42.18%, 20=56.27%, 50=0.82% 00:27:38.666 lat (msec) : 250=0.60% 00:27:38.666 cpu : usr=59.73%, sys=34.58%, ctx=41, majf=0, minf=19 00:27:38.666 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:27:38.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:38.666 issued rwts: total=10598,10577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.666 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:38.666 00:27:38.666 Run status group 0 (all jobs): 00:27:38.666 READ: bw=20.6MiB/s (21.7MB/s), 20.6MiB/s-20.6MiB/s (21.7MB/s-21.7MB/s), io=41.4MiB (43.4MB), run=2005-2005msec 00:27:38.666 WRITE: bw=20.6MiB/s (21.6MB/s), 20.6MiB/s-20.6MiB/s (21.6MB/s-21.6MB/s), io=41.3MiB (43.3MB), run=2005-2005msec 00:27:38.666 17:19:54 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:38.666 17:19:54 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:40.042 17:19:55 -- host/fio.sh@64 -- # ls_nested_guid=0ab23c76-50d4-419e-a8bc-2acda3fcd0e0 00:27:40.042 17:19:55 -- host/fio.sh@65 -- # get_lvs_free_mb 0ab23c76-50d4-419e-a8bc-2acda3fcd0e0 00:27:40.042 17:19:55 -- common/autotest_common.sh@1343 -- # local lvs_uuid=0ab23c76-50d4-419e-a8bc-2acda3fcd0e0 00:27:40.042 17:19:55 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:40.042 17:19:55 -- common/autotest_common.sh@1345 -- # local fc 00:27:40.042 17:19:55 -- common/autotest_common.sh@1346 -- # local cs 00:27:40.042 17:19:55 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:40.042 17:19:56 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:40.042 { 00:27:40.042 "uuid": "118b138a-fa7b-4cdd-95a5-9d604f3855c7", 00:27:40.042 "name": "lvs_0", 00:27:40.042 "base_bdev": "Nvme0n1", 00:27:40.042 "total_data_clusters": 930, 00:27:40.042 "free_clusters": 0, 00:27:40.042 "block_size": 512, 00:27:40.042 "cluster_size": 1073741824 00:27:40.042 }, 00:27:40.042 { 00:27:40.042 "uuid": "0ab23c76-50d4-419e-a8bc-2acda3fcd0e0", 00:27:40.042 "name": "lvs_n_0", 00:27:40.042 "base_bdev": "bbf7f41a-0058-476b-ad5a-b08ace007aa5", 00:27:40.042 "total_data_clusters": 237847, 00:27:40.042 "free_clusters": 237847, 00:27:40.042 "block_size": 512, 00:27:40.042 "cluster_size": 4194304 00:27:40.042 } 00:27:40.042 ]' 00:27:40.042 17:19:56 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="0ab23c76-50d4-419e-a8bc-2acda3fcd0e0") .free_clusters' 00:27:40.304 17:19:56 -- common/autotest_common.sh@1348 -- # fc=237847 00:27:40.304 17:19:56 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="0ab23c76-50d4-419e-a8bc-2acda3fcd0e0") .cluster_size' 00:27:40.304 17:19:56 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:40.304 17:19:56 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:27:40.304 17:19:56 -- common/autotest_common.sh@1353 -- # echo 951388 00:27:40.304 951388 00:27:40.304 17:19:56 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 
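[editor's note] The get_lvs_free_mb jq dance above is just free_clusters x cluster_size expressed in MiB; note that lvs_n_0 is a nested store built on top of lvs_0's lbd_0 volume, with 4 MiB clusters instead of 1 GiB. Reproducing both values reported in this log:

    # free_mb = free_clusters * cluster_size / 1 MiB
    echo $(( 930    * 1073741824 / 1048576 ))   # lvs_0:   930 x 1 GiB  -> 952320
    echo $(( 237847 * 4194304    / 1048576 ))   # lvs_n_0: 237847 x 4 MiB -> 951388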
00:27:40.865 034dfd14-859e-4135-b76e-76917f6b419b 00:27:40.865 17:19:56 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:41.121 17:19:57 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:41.377 17:19:57 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:41.635 17:19:57 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:41.635 17:19:57 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:41.635 17:19:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:41.635 17:19:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:41.635 17:19:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:41.635 17:19:57 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:41.635 17:19:57 -- common/autotest_common.sh@1320 -- # shift 00:27:41.635 17:19:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:41.635 17:19:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:41.635 17:19:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:41.635 17:19:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:41.635 17:19:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:41.635 17:19:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:41.635 17:19:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:41.635 17:19:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:41.635 17:19:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:41.635 17:19:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:41.635 17:19:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:41.635 17:19:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:41.635 17:19:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:41.635 17:19:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:41.635 17:19:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:41.892 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:41.892 fio-3.35 00:27:41.892 Starting 1 thread 00:27:41.892 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.418 00:27:44.418 test: (groupid=0, jobs=1): err= 0: pid=641446: Sat Jul 20 17:20:00 2024 00:27:44.418 read: IOPS=6023, BW=23.5MiB/s (24.7MB/s)(47.3MiB/2009msec) 00:27:44.418 slat 
(nsec): min=1929, max=177666, avg=2534.38, stdev=2305.75 00:27:44.418 clat (usec): min=6022, max=17410, avg=11812.20, stdev=1136.55 00:27:44.418 lat (usec): min=6027, max=17412, avg=11814.74, stdev=1136.48 00:27:44.418 clat percentiles (usec): 00:27:44.418 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:27:44.418 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:27:44.418 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:27:44.418 | 99.00th=[14484], 99.50th=[14746], 99.90th=[16450], 99.95th=[17171], 00:27:44.418 | 99.99th=[17433] 00:27:44.418 bw ( KiB/s): min=22400, max=25008, per=99.89%, avg=24066.00, stdev=1144.61, samples=4 00:27:44.418 iops : min= 5600, max= 6252, avg=6016.50, stdev=286.15, samples=4 00:27:44.418 write: IOPS=6004, BW=23.5MiB/s (24.6MB/s)(47.1MiB/2009msec); 0 zone resets 00:27:44.418 slat (usec): min=2, max=119, avg= 2.67, stdev= 1.69 00:27:44.418 clat (usec): min=2996, max=16490, avg=9266.37, stdev=1026.63 00:27:44.418 lat (usec): min=3003, max=16492, avg=9269.04, stdev=1026.66 00:27:44.418 clat percentiles (usec): 00:27:44.418 | 1.00th=[ 6849], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[ 8455], 00:27:44.418 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503], 00:27:44.418 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10552], 95.00th=[10945], 00:27:44.418 | 99.00th=[11600], 99.50th=[12125], 99.90th=[14222], 99.95th=[16319], 00:27:44.418 | 99.99th=[16450] 00:27:44.418 bw ( KiB/s): min=23320, max=24472, per=99.96%, avg=24010.00, stdev=498.76, samples=4 00:27:44.418 iops : min= 5830, max= 6118, avg=6002.50, stdev=124.69, samples=4 00:27:44.418 lat (msec) : 4=0.02%, 10=41.65%, 20=58.33% 00:27:44.418 cpu : usr=51.89%, sys=39.34%, ctx=65, majf=0, minf=19 00:27:44.418 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:44.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:44.418 issued rwts: total=12101,12064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.418 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:44.418 00:27:44.418 Run status group 0 (all jobs): 00:27:44.418 READ: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=47.3MiB (49.6MB), run=2009-2009msec 00:27:44.418 WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2009-2009msec 00:27:44.418 17:20:00 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:44.418 17:20:00 -- host/fio.sh@74 -- # sync 00:27:44.418 17:20:00 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:48.590 17:20:04 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:48.590 17:20:04 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:51.883 17:20:07 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:51.883 17:20:07 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:53.780 17:20:09 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:53.780 17:20:09 -- host/fio.sh@85 -- # rm -f 
./local-test-0-verify.state 00:27:53.780 17:20:09 -- host/fio.sh@86 -- # nvmftestfini 00:27:53.780 17:20:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:53.780 17:20:09 -- nvmf/common.sh@116 -- # sync 00:27:53.780 17:20:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:53.780 17:20:09 -- nvmf/common.sh@119 -- # set +e 00:27:53.780 17:20:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:53.780 17:20:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:53.780 rmmod nvme_tcp 00:27:53.780 rmmod nvme_fabrics 00:27:53.780 rmmod nvme_keyring 00:27:53.780 17:20:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:53.780 17:20:09 -- nvmf/common.sh@123 -- # set -e 00:27:53.780 17:20:09 -- nvmf/common.sh@124 -- # return 0 00:27:53.780 17:20:09 -- nvmf/common.sh@477 -- # '[' -n 638536 ']' 00:27:53.780 17:20:09 -- nvmf/common.sh@478 -- # killprocess 638536 00:27:53.780 17:20:09 -- common/autotest_common.sh@926 -- # '[' -z 638536 ']' 00:27:53.780 17:20:09 -- common/autotest_common.sh@930 -- # kill -0 638536 00:27:53.780 17:20:09 -- common/autotest_common.sh@931 -- # uname 00:27:53.780 17:20:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:53.780 17:20:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 638536 00:27:53.780 17:20:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:53.780 17:20:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:53.780 17:20:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 638536' 00:27:53.780 killing process with pid 638536 00:27:53.780 17:20:09 -- common/autotest_common.sh@945 -- # kill 638536 00:27:53.780 17:20:09 -- common/autotest_common.sh@950 -- # wait 638536 00:27:53.780 17:20:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:53.780 17:20:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:53.780 17:20:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:53.780 17:20:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.780 17:20:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:53.780 17:20:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.780 17:20:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:53.780 17:20:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.308 17:20:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:56.308 00:27:56.308 real 0m37.459s 00:27:56.308 user 2m24.182s 00:27:56.308 sys 0m6.613s 00:27:56.308 17:20:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.308 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:27:56.308 ************************************ 00:27:56.308 END TEST nvmf_fio_host 00:27:56.308 ************************************ 00:27:56.308 17:20:11 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:56.308 17:20:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:56.308 17:20:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:56.308 17:20:11 -- common/autotest_common.sh@10 -- # set +x 00:27:56.308 ************************************ 00:27:56.308 START TEST nvmf_failover 00:27:56.308 ************************************ 00:27:56.308 17:20:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:56.308 * Looking for test storage... 
00:27:56.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:56.308 17:20:12 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.308 17:20:12 -- nvmf/common.sh@7 -- # uname -s 00:27:56.308 17:20:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.308 17:20:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.308 17:20:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.308 17:20:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.308 17:20:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.308 17:20:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.308 17:20:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.308 17:20:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.308 17:20:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.308 17:20:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.308 17:20:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:56.308 17:20:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:56.308 17:20:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.308 17:20:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.308 17:20:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.308 17:20:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.308 17:20:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.308 17:20:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.308 17:20:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.308 17:20:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.308 17:20:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.308 17:20:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.308 17:20:12 -- paths/export.sh@5 -- # export PATH 00:27:56.308 17:20:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.308 17:20:12 -- nvmf/common.sh@46 -- # : 0 00:27:56.308 17:20:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:56.308 17:20:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:56.308 17:20:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:56.308 17:20:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.308 17:20:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.308 17:20:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:56.308 17:20:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:56.308 17:20:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:56.308 17:20:12 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.308 17:20:12 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:56.308 17:20:12 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:56.308 17:20:12 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:56.308 17:20:12 -- host/failover.sh@18 -- # nvmftestinit 00:27:56.308 17:20:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:56.308 17:20:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.308 17:20:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:56.308 17:20:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:56.308 17:20:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:56.308 17:20:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.308 17:20:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.308 17:20:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.308 17:20:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:56.308 17:20:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:56.308 17:20:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:56.308 17:20:12 -- common/autotest_common.sh@10 -- # set +x 00:27:58.206 17:20:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:58.206 17:20:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:58.206 17:20:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:58.206 17:20:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:58.206 17:20:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:58.206 17:20:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:58.206 17:20:14 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:27:58.206 17:20:14 -- nvmf/common.sh@294 -- # net_devs=() 00:27:58.206 17:20:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:58.206 17:20:14 -- nvmf/common.sh@295 -- # e810=() 00:27:58.206 17:20:14 -- nvmf/common.sh@295 -- # local -ga e810 00:27:58.206 17:20:14 -- nvmf/common.sh@296 -- # x722=() 00:27:58.206 17:20:14 -- nvmf/common.sh@296 -- # local -ga x722 00:27:58.206 17:20:14 -- nvmf/common.sh@297 -- # mlx=() 00:27:58.206 17:20:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:58.206 17:20:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.206 17:20:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:58.206 17:20:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:58.206 17:20:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:58.206 17:20:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:58.206 17:20:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.206 17:20:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:58.206 17:20:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.206 17:20:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:58.206 17:20:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:58.206 17:20:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.206 17:20:14 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:27:58.206 17:20:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.206 17:20:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.206 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.206 17:20:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.206 17:20:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:58.206 17:20:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.206 17:20:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:58.206 17:20:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.206 17:20:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.206 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.206 17:20:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.206 17:20:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:58.206 17:20:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:58.206 17:20:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:58.206 17:20:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.206 17:20:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.206 17:20:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.206 17:20:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:58.206 17:20:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.206 17:20:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.206 17:20:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:58.206 17:20:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.206 17:20:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.206 17:20:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:58.206 17:20:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:58.206 17:20:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.206 17:20:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.206 17:20:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.206 17:20:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.206 17:20:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:58.206 17:20:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.206 17:20:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.206 17:20:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.206 17:20:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:58.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:27:58.206 00:27:58.206 --- 10.0.0.2 ping statistics --- 00:27:58.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.206 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:58.206 17:20:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:27:58.206 00:27:58.206 --- 10.0.0.1 ping statistics --- 00:27:58.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.206 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:27:58.206 17:20:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.206 17:20:14 -- nvmf/common.sh@410 -- # return 0 00:27:58.206 17:20:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:58.206 17:20:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.206 17:20:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:58.206 17:20:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.206 17:20:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:58.206 17:20:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:58.207 17:20:14 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:58.207 17:20:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:58.207 17:20:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:58.207 17:20:14 -- common/autotest_common.sh@10 -- # set +x 00:27:58.207 17:20:14 -- nvmf/common.sh@469 -- # nvmfpid=644863 00:27:58.207 17:20:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:58.207 17:20:14 -- nvmf/common.sh@470 -- # waitforlisten 644863 00:27:58.207 17:20:14 -- common/autotest_common.sh@819 -- # '[' -z 644863 ']' 00:27:58.207 17:20:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.207 17:20:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:58.207 17:20:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.207 17:20:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:58.207 17:20:14 -- common/autotest_common.sh@10 -- # set +x 00:27:58.207 [2024-07-20 17:20:14.315652] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:58.207 [2024-07-20 17:20:14.315727] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.207 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.465 [2024-07-20 17:20:14.385928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:58.465 [2024-07-20 17:20:14.474609] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:58.465 [2024-07-20 17:20:14.474779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.465 [2024-07-20 17:20:14.474808] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.465 [2024-07-20 17:20:14.474825] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
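[Annotation] The nvmf/common.sh records a few lines back show how the test wires the two ice ports together: the target-side port (cvl_0_0) is moved into a private network namespace so that one host can act as both initiator (10.0.0.1 on cvl_0_1) and target (10.0.0.2 on cvl_0_0), with the two pings confirming reachability in both directions. A minimal bash sketch of that same sequence, with interface names, addresses, and port numbers copied from the trace (run as root; the script also flushes both interfaces first and adds error handling not shown here):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, host namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                        # sanity check: host -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1                    # and namespace -> host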
00:27:58.465 [2024-07-20 17:20:14.474931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.465 [2024-07-20 17:20:14.474994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.465 [2024-07-20 17:20:14.474997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.397 17:20:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:59.397 17:20:15 -- common/autotest_common.sh@852 -- # return 0 00:27:59.397 17:20:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:59.397 17:20:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:59.397 17:20:15 -- common/autotest_common.sh@10 -- # set +x 00:27:59.397 17:20:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.397 17:20:15 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:59.397 [2024-07-20 17:20:15.477825] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.397 17:20:15 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:59.655 Malloc0 00:27:59.655 17:20:15 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.913 17:20:15 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:00.170 17:20:16 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.427 [2024-07-20 17:20:16.466448] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.427 17:20:16 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:00.684 [2024-07-20 17:20:16.711226] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:00.684 17:20:16 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:00.942 [2024-07-20 17:20:16.939983] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:00.942 17:20:16 -- host/failover.sh@31 -- # bdevperf_pid=645176 00:28:00.942 17:20:16 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:00.942 17:20:16 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:00.942 17:20:16 -- host/failover.sh@34 -- # waitforlisten 645176 /var/tmp/bdevperf.sock 00:28:00.942 17:20:16 -- common/autotest_common.sh@819 -- # '[' -z 645176 ']' 00:28:00.942 17:20:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:00.942 17:20:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:00.942 17:20:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:00.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:00.942 17:20:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:00.942 17:20:16 -- common/autotest_common.sh@10 -- # set +x 00:28:01.873 17:20:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:01.873 17:20:17 -- common/autotest_common.sh@852 -- # return 0 00:28:01.873 17:20:17 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:02.130 NVMe0n1 00:28:02.130 17:20:18 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:02.694 00:28:02.694 17:20:18 -- host/failover.sh@39 -- # run_test_pid=645358 00:28:02.694 17:20:18 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:02.694 17:20:18 -- host/failover.sh@41 -- # sleep 1 00:28:03.626 17:20:19 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:03.884 [2024-07-20 17:20:19.928076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121e540 is same with the state(5) to be set
[... the identical recv-state message for tqpair=0x121e540 repeats many times here while the port 4420 qpairs are torn down; only the timestamps advance, so the run is condensed ...]
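[Annotation] The moving parts of host/failover.sh as set up above: one malloc-backed subsystem (nqn.2016-06.io.spdk:cnode1) exposed on three TCP portals (4420, 4421, 4422), and a bdevperf initiator attached to the same controller NVMe0 over two of those portals. Removing the 4420 listener while perform_tests is running forces the initiator onto the surviving 4421 path. A condensed bash sketch of the RPC sequence (every command appears in the trace; the loop is a paraphrase of three separate calls, not the script's literal form):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf attaches the same controller over two portals; either can carry I/O
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # failover trigger: drop the first path while the verify workload is running
    $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420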
00:28:03.884 17:20:19 -- host/failover.sh@45 -- # sleep 3 00:28:07.173 17:20:22 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:07.429 00:28:07.429 17:20:23 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:07.686 [2024-07-20 17:20:23.646750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121f3d0 is same with the state(5) to be set
[... the identical recv-state message for tqpair=0x121f3d0 repeats many times here while the port 4421 qpairs are torn down; only the timestamps advance, so the run is condensed ...]
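[Annotation] The recv-state error bursts are the target-side signature of listener removal: every qpair that was connected through the dropped portal is marked for teardown, and repeated attempts to move an already-closing qpair into the same recv state get logged at ERROR level. The initiator is expected to keep I/O flowing on the remaining path the whole time. A hedged way to confirm that from the initiator side, not part of this test run (bdev_nvme_get_controllers is a standard SPDK RPC; its output shape varies by release):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0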
00:28:07.686 17:20:23 -- host/failover.sh@50 -- # sleep 3 00:28:10.960 17:20:26 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.960 [2024-07-20 17:20:26.937443] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.960 17:20:26 -- host/failover.sh@55 -- # sleep 1 00:28:11.893 17:20:27 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:12.151 [2024-07-20 17:20:28.194203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121ff40 is same with the state(5) to be set
[... the identical recv-state message for tqpair=0x121ff40 repeats many times here while the port 4422 qpairs are torn down; only the timestamps advance, so the run is condensed ...]
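[Annotation] At this point the test has walked a full cycle: 4420 removed (I/O on 4421), 4421 removed (I/O on the freshly attached 4422 path), then 4420 re-announced and 4422 removed, so I/O must fail back to the restored 4420 portal. When reproducing this by hand, the portals a subsystem currently exposes can be inspected with an RPC along these lines (the name is taken from current SPDK releases and is an assumption for this exact version; this script never calls it):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1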
00:28:12.152 17:20:28 -- host/failover.sh@59 -- # wait 645358 00:28:18.712 0 00:28:18.712 17:20:33 -- host/failover.sh@61 -- # killprocess 645176 00:28:18.712 17:20:33 -- common/autotest_common.sh@926 -- # '[' -z 645176 ']' 00:28:18.712 17:20:33 -- common/autotest_common.sh@930 -- # kill -0 645176 00:28:18.712 17:20:33 -- common/autotest_common.sh@931 -- # uname 00:28:18.712 17:20:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:18.712 17:20:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 645176 00:28:18.712 17:20:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:18.712 17:20:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:18.712 17:20:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 645176' killing process with pid 645176 00:28:18.712 17:20:33 -- common/autotest_common.sh@945 -- # kill 645176 00:28:18.712 17:20:33 -- common/autotest_common.sh@950 -- # wait 645176 00:28:18.712 17:20:34 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt [2024-07-20 17:20:16.996518] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... [2024-07-20 17:20:16.996620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645176 ] EAL: No free 2048 kB hugepages reported on node 1 [2024-07-20 17:20:17.057000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-07-20 17:20:17.141219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 Running I/O for 15 seconds...
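[Annotation] The records above and below come from try.txt, bdevperf's own log, replayed by the host/failover.sh@63 trap handler after the run: first the EAL/reactor startup, then the 15-second verify run, then bursts of commands completed as ABORTED - SQ DELETION, which is the expected initiator-side view of in-flight I/O being cancelled when a submission queue is deleted mid-failover. For reference, the workload that produced this log (command copied from host/failover.sh@30 earlier; flag glosses are standard bdevperf semantics, except -f, which is simply whatever the script passes):

    # -z: start suspended and wait for the perform_tests RPC; -r: RPC socket path
    # -q 128: 128 outstanding I/Os per channel; -o 4096: 4 KiB I/O size
    # -w verify: write, read back, and compare; -t 15: run for 15 seconds
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f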
00:28:18.712 [2024-07-20 17:20:19.929510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 17:20:19.929583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 17:20:19.929627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 17:20:19.929659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 17:20:19.929689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 17:20:19.929718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 17:20:19.929747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 17:20:19.929777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 17:20:19.929816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 17:20:19.929846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.712 [2024-07-20 
17:20:19.929876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.712 [2024-07-20 17:20:19.929890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... a long run of similar READ/WRITE commands on qid:1 (lbas in the 114792-115680 range), each completing as ABORTED - SQ DELETION, condensed here ...] 00:28:18.712 [2024-07-20 17:20:19.930776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90
nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.930789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.930815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.930830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.930845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.930859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.930874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.930888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.930904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.930918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.930934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.930948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.930963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.930977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.930992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.931006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115680 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.931138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.931167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.931196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.931284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.931341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 
[2024-07-20 17:20:19.931400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.713 [2024-07-20 17:20:19.931490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.713 [2024-07-20 17:20:19.931679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.713 [2024-07-20 17:20:19.931693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.931722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.931751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.931783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.931822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.931852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.931881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.931910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.931939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.931967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.931982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.931996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932294] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.932924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.932970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.714 [2024-07-20 17:20:19.932985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.714 [2024-07-20 17:20:19.933001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.714 [2024-07-20 17:20:19.933015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.715 [2024-07-20 17:20:19.933044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.715 [2024-07-20 17:20:19.933102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 
[2024-07-20 17:20:19.933203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.715 [2024-07-20 17:20:19.933366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d4320 is same with the state(5) to be set 00:28:18.715 [2024-07-20 17:20:19.933398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.715 [2024-07-20 17:20:19.933409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.715 [2024-07-20 17:20:19.933426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115624 len:8 PRP1 0x0 PRP2 0x0 00:28:18.715 [2024-07-20 17:20:19.933439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.715 [2024-07-20 17:20:19.933500] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19d4320 was disconnected and freed. reset controller. 
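The (00/08) pair printed with every completion above is the NVMe status code type / status code: SCT 0x0 (generic command status) with SC 0x08, Command Aborted due to SQ Deletion, which is what the host driver stamps on every command still outstanding when an I/O submission queue is torn down during a reset. Since dnr:0 leaves the Do Not Retry bit clear, the bdev layer is free to requeue these I/Os once a path comes back. A minimal standalone C sketch of decoding that pair; the two constants mirror the NVMe base spec (and SPDK's spdk/nvme_spec.h definitions) but this does not call into SPDK itself:

/* decode_status.c - decode the "(SCT/SC)" pair printed by
 * spdk_nvme_print_completion, e.g. "(00/08)".
 * Standalone sketch: enum values copied from the NVMe base spec,
 * matching SPDK's spdk/nvme_spec.h; no SPDK headers needed. */
#include <stdio.h>

enum { SCT_GENERIC = 0x0 };              /* status code type: generic */
enum { SC_ABORTED_SQ_DELETION = 0x08 };  /* command aborted due to SQ deletion */

static const char *decode(unsigned sct, unsigned sc)
{
    if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION)
        return "ABORTED - SQ DELETION";
    return "other status";
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;      /* the "(00/08)" from the log */
    printf("(%02x/%02x) -> %s\n", sct, sc, decode(sct, sc));
    return 0;
}

Compiled with cc decode_status.c, this prints "(00/08) -> ABORTED - SQ DELETION", matching the text the log formatter emits for these completions.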
00:28:18.715 [2024-07-20 17:20:19.933528] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:18.715 [2024-07-20 17:20:19.933563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:18.715 [2024-07-20 17:20:19.933582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.715 [2024-07-20 17:20:19.933597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:18.715 [2024-07-20 17:20:19.933610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.715 [2024-07-20 17:20:19.933624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:18.715 [2024-07-20 17:20:19.933637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.715 [2024-07-20 17:20:19.933651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:18.715 [2024-07-20 17:20:19.933663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.715 [2024-07-20 17:20:19.933677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:18.715 [2024-07-20 17:20:19.935959] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:18.715 [2024-07-20 17:20:19.935996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b5790 (9): Bad file descriptor
00:28:18.715 [2024-07-20 17:20:20.004647] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
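bdev_nvme_failover_trid above records the bdev layer switching the controller's active path from the trid 10.0.0.2:4420 to the alternate trid 10.0.0.2:4421 registered for the same controller; the flush error "(9): Bad file descriptor" is errno 9 (EBADF) from flushing a TCP qpair whose socket was already closed, after which the reconnect on the new path completes ("Resetting controller successful"). A conceptual C sketch of that round-robin trid walk; every name here (struct trid, try_connect) is a hypothetical stand-in, not an SPDK API, and the sketch models only the path-selection order, not SPDK's reconnect/retry policy:

/* failover_sketch.c - conceptual model of round-robin path failover,
 * in the spirit of what bdev_nvme_failover_trid logs above.
 * All types and functions are hypothetical stand-ins, not SPDK APIs. */
#include <stdbool.h>
#include <stdio.h>

struct trid { const char *addr; const char *svcid; };

/* hypothetical transport connect; pretend the first path is dead */
static bool try_connect(const struct trid *t)
{
    return t->svcid[3] != '0';   /* "4420" fails, "4421"/"4422" succeed */
}

int main(void)
{
    struct trid paths[] = { { "10.0.0.2", "4420" },
                            { "10.0.0.2", "4421" },
                            { "10.0.0.2", "4422" } };
    int n = 3, active = 0;

    /* on qpair failure: queued I/O is completed as ABORTED - SQ DELETION,
     * then the remaining trids are walked until one reconnects */
    for (int i = 1; i <= n; i++) {
        int next = (active + i) % n;
        printf("Start failover from %s:%s to %s:%s\n",
               paths[active].addr, paths[active].svcid,
               paths[next].addr, paths[next].svcid);
        if (try_connect(&paths[next])) {
            active = next;
            printf("Resetting controller successful (now on %s:%s)\n",
                   paths[active].addr, paths[active].svcid);
            break;
        }
    }
    return 0;
}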
00:28:18.715 [2024-07-20 17:20:23.647309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:18.715 [2024-07-20 17:20:23.647353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated records elided, 2024-07-20 17:20:23.647385 through 17:20:23.647456: the remaining ASYNC EVENT REQUESTs on the admin queue (cid:2, cid:1, cid:0) are likewise completed as ABORTED - SQ DELETION (00/08) ...]
00:28:18.715 [2024-07-20 17:20:23.647469] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b5790 is same with the state(5) to be set
00:28:18.715 [2024-07-20 17:20:23.647646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.715 [2024-07-20 17:20:23.647668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated records elided, 2024-07-20 17:20:23.647694 through 17:20:23.650285: pairs of nvme_io_qpair_print_command / spdk_nvme_print_completion notices for queued READ/WRITE commands on qid:1 (len:8, lba values from 16 up to 131064), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:28:18.717 [2024-07-20 17:20:23.650300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.717 [2024-07-20 17:20:23.650314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 
[2024-07-20 17:20:23.650329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.717 [2024-07-20 17:20:23.650342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.717 [2024-07-20 17:20:23.650371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.717 [2024-07-20 17:20:23.650399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.717 [2024-07-20 17:20:23.650427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.717 [2024-07-20 17:20:23.650455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.717 [2024-07-20 17:20:23.650491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.717 [2024-07-20 17:20:23.650519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.717 [2024-07-20 17:20:23.650548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.717 [2024-07-20 17:20:23.650576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.717 [2024-07-20 17:20:23.650604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.717 [2024-07-20 17:20:23.650632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.717 [2024-07-20 17:20:23.650665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.717 [2024-07-20 17:20:23.650680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.650694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.650722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.650750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.650779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.650836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.650866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.650895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.650925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:107 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.650954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.650983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.650998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.651154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.651183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.651239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.651296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:18.718 [2024-07-20 17:20:23.651324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.718 [2024-07-20 17:20:23.651524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.718 [2024-07-20 17:20:23.651538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1cb0 is same with the state(5) to be set 00:28:18.718 [2024-07-20 17:20:23.651555] nvme_qpair.c: 
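The print_command/print_completion pairs condensed above follow a fixed one-line format, which makes the dump easy to summarize programmatically. A minimal sketch of doing so (the regexes and field names below are assumptions inferred from the log lines above, not an SPDK-defined format):

    # Illustrative only: tally the aborted queued I/O from an autotest log.
    import re
    from collections import Counter

    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
        r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
        r"lba:(?P<lba>\d+) len:(?P<len>\d+)")
    CPL_RE = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>[A-Z -]+?) "
        r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)")

    def summarize(log_text: str) -> None:
        # Collect every printed command and every printed completion status.
        cmds = list(CMD_RE.finditer(log_text))
        ops = Counter(m.group("op") for m in cmds)
        lbas = [int(m.group("lba")) for m in cmds]
        statuses = Counter(m.group("status") for m in CPL_RE.finditer(log_text))
        print("commands:", dict(ops), "lba range:", min(lbas), "-", max(lbas))
        print("completions:", dict(statuses))

Run against this section, summarize() would report only READ/WRITE commands on sqid:1, all completed as ABORTED - SQ DELETION.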
00:28:18.718 [2024-07-20 17:20:23.651555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:18.718 [2024-07-20 17:20:23.651571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:18.718 [2024-07-20 17:20:23.651582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:576 len:8 PRP1 0x0 PRP2 0x0
00:28:18.718 [2024-07-20 17:20:23.651595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:18.718 [2024-07-20 17:20:23.651667] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19c1cb0 was disconnected and freed. reset controller.
00:28:18.718 [2024-07-20 17:20:23.651685] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:28:18.718 [2024-07-20 17:20:23.651702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:18.718 [2024-07-20 17:20:23.653921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:18.718 [2024-07-20 17:20:23.653961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b5790 (9): Bad file descriptor
00:28:18.718 [2024-07-20 17:20:23.815303] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
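In the completions above, "(00/08)" is the NVMe status code type / status code pair: type 0x00 is the generic command status set, and code 0x08 is defined by the NVMe base specification as "Command Aborted due to SQ Deletion", which is why every I/O still queued when the submission queue was torn down for the failover is printed as aborted rather than failed. The trailing p/m/dnr fields are the phase tag, "more", and "do not retry" bits of the completion's status word. A minimal decode sketch using those spec-defined bit positions (the helper name and status table are illustrative, not SPDK APIs):

    # Decode the status half of an NVMe completion queue entry (dword 3).
    GENERIC_STATUS = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}

    def decode_cqe_status(dw3: int) -> dict:
        return {
            "p":   (dw3 >> 16) & 0x1,   # phase tag
            "sc":  (dw3 >> 17) & 0xFF,  # status code
            "sct": (dw3 >> 25) & 0x7,   # status code type (0 = generic)
            "m":   (dw3 >> 30) & 0x1,   # more status information available
            "dnr": (dw3 >> 31) & 0x1,   # do not retry
        }

    s = decode_cqe_status(0x08 << 17)   # sct=0, sc=0x08, p=m=dnr=0
    assert GENERIC_STATUS[s["sc"]] == "ABORTED - SQ DELETION"

After the reset succeeds, the test resumes I/O against the failover target and the same abort pattern repeats when that queue pair is in turn deleted, as condensed below.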
00:28:18.718-00:28:18.721 [2024-07-20 17:20:28.195393 - 17:20:28.199273] nvme_qpair.c: 243/474: [log condensed: dozens of queued READ/WRITE commands on sqid:1 (nsid:1, len:8, lba 41096-42304) printed by nvme_io_qpair_print_command, each completed by spdk_nvme_print_completion as *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:18.721 [2024-07-20 17:20:28.199287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d83d0 is same with
the state(5) to be set 00:28:18.721 [2024-07-20 17:20:28.199305] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:18.721 [2024-07-20 17:20:28.199316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:18.721 [2024-07-20 17:20:28.199334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41816 len:8 PRP1 0x0 PRP2 0x0 00:28:18.721 [2024-07-20 17:20:28.199347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.721 [2024-07-20 17:20:28.199414] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19d83d0 was disconnected and freed. reset controller. 00:28:18.721 [2024-07-20 17:20:28.199434] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:18.721 [2024-07-20 17:20:28.199468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.721 [2024-07-20 17:20:28.199492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.721 [2024-07-20 17:20:28.199515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.721 [2024-07-20 17:20:28.199529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.721 [2024-07-20 17:20:28.199543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.721 [2024-07-20 17:20:28.199556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.721 [2024-07-20 17:20:28.199570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.721 [2024-07-20 17:20:28.199584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.721 [2024-07-20 17:20:28.199597] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:18.721 [2024-07-20 17:20:28.201690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:18.721 [2024-07-20 17:20:28.201731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b5790 (9): Bad file descriptor 00:28:18.721 [2024-07-20 17:20:28.234392] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
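The block above completes one failover iteration: TCP qpair 0x19d83d0 serving 10.0.0.2:4422 is disconnected and freed, its queued I/O is completed manually as ABORTED - SQ DELETION, and bdev_nvme fails the namespace over to 10.0.0.2:4420 before the controller reset succeeds. For orientation, a minimal sketch of the multipath wiring this test depends on, assembled only from rpc.py invocations traced elsewhere in this log (NQN, addresses, ports, and flags are copied from those traces, not invented):

# target side: expose the subsystem on each extra path that will take part in failover
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# initiator side (bdevperf): attach every path under the same controller name NVMe0
# (-f ipv4 selects the address family here; it is not a failover switch)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# detaching the currently active path while bdevperf drives verify I/O forces the failover seen above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1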
00:28:18.721
00:28:18.721 Latency(us)
00:28:18.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.721 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:18.721 Verification LBA range: start 0x0 length 0x4000
00:28:18.721 NVMe0n1 : 15.01 11932.35 46.61 1046.84 0.00 9845.39 952.70 16019.91
00:28:18.722 ===================================================================================================================
00:28:18.722 Total : 11932.35 46.61 1046.84 0.00 9845.39 952.70 16019.91
00:28:18.722 Received shutdown signal, test time was about 15.000000 seconds
00:28:18.722
00:28:18.722 Latency(us)
00:28:18.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.722 ===================================================================================================================
00:28:18.722 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:18.722 17:20:34 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:18.722 17:20:34 -- host/failover.sh@65 -- # count=3
00:28:18.722 17:20:34 -- host/failover.sh@67 -- # (( count != 3 ))
00:28:18.722 17:20:34 -- host/failover.sh@73 -- # bdevperf_pid=647234
00:28:18.722 17:20:34 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:18.722 17:20:34 -- host/failover.sh@75 -- # waitforlisten 647234 /var/tmp/bdevperf.sock
00:28:18.722 17:20:34 -- common/autotest_common.sh@819 -- # '[' -z 647234 ']'
00:28:18.722 17:20:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:18.722 17:20:34 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:18.722 17:20:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
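The host/failover.sh@65 and @67 entries above are the pass criterion for the phase that just finished: the script counts 'Resetting controller successful' notices in the captured bdevperf output and fails unless exactly three were seen, one per detached path. A sketch of that check, assuming the output was redirected into the try.txt file this test cats and removes further down (the redirect itself is outside this excerpt, and $testdir is a stand-in for the test's host directory):

# one successful controller reset is expected per path that was torn down
count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
if (( count != 3 )); then
	echo "expected 3 successful resets, saw $count"
	exit 1
fi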
00:28:18.722 17:20:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:18.722 17:20:34 -- common/autotest_common.sh@10 -- # set +x 00:28:18.979 17:20:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:18.979 17:20:35 -- common/autotest_common.sh@852 -- # return 0 00:28:18.979 17:20:35 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:19.236 [2024-07-20 17:20:35.289342] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:19.236 17:20:35 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:19.493 [2024-07-20 17:20:35.525997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:19.493 17:20:35 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:20.057 NVMe0n1 00:28:20.057 17:20:35 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:20.314 00:28:20.314 17:20:36 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:20.879 00:28:20.879 17:20:36 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:20.879 17:20:36 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:21.136 17:20:37 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:21.393 17:20:37 -- host/failover.sh@87 -- # sleep 3 00:28:24.672 17:20:40 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:24.672 17:20:40 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:24.672 17:20:40 -- host/failover.sh@90 -- # run_test_pid=648049 00:28:24.672 17:20:40 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:24.672 17:20:40 -- host/failover.sh@92 -- # wait 648049 00:28:25.607 0 00:28:25.607 17:20:41 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:25.607 [2024-07-20 17:20:34.149499] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:25.607 [2024-07-20 17:20:34.149592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647234 ] 00:28:25.607 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.607 [2024-07-20 17:20:34.209822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.607 [2024-07-20 17:20:34.290764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.607 [2024-07-20 17:20:37.320104] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:25.607 [2024-07-20 17:20:37.320190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.607 [2024-07-20 17:20:37.320213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.608 [2024-07-20 17:20:37.320245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.608 [2024-07-20 17:20:37.320260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.608 [2024-07-20 17:20:37.320274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.608 [2024-07-20 17:20:37.320288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.608 [2024-07-20 17:20:37.320302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.608 [2024-07-20 17:20:37.320317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.608 [2024-07-20 17:20:37.320331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.608 [2024-07-20 17:20:37.320368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.608 [2024-07-20 17:20:37.320400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f5790 (9): Bad file descriptor 00:28:25.608 [2024-07-20 17:20:37.330106] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:25.608 Running I/O for 1 seconds... 
00:28:25.608
00:28:25.608 Latency(us)
00:28:25.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:25.608 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:25.608 Verification LBA range: start 0x0 length 0x4000
00:28:25.608 NVMe0n1 : 1.01 12871.32 50.28 0.00 0.00 9899.42 1638.40 13495.56
00:28:25.608 ===================================================================================================================
00:28:25.608 Total : 12871.32 50.28 0.00 0.00 9899.42 1638.40 13495.56
00:28:25.608 17:20:41 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:25.608 17:20:41 -- host/failover.sh@95 -- # grep -q NVMe0
00:28:25.864 17:20:41 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:26.121 17:20:42 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:26.121 17:20:42 -- host/failover.sh@99 -- # grep -q NVMe0
00:28:26.377 17:20:42 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:26.635 17:20:42 -- host/failover.sh@101 -- # sleep 3
00:28:29.906 17:20:45 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:29.906 17:20:45 -- host/failover.sh@103 -- # grep -q NVMe0
00:28:29.906 17:20:45 -- host/failover.sh@108 -- # killprocess 647234
00:28:29.906 17:20:45 -- common/autotest_common.sh@926 -- # '[' -z 647234 ']'
00:28:29.906 17:20:45 -- common/autotest_common.sh@930 -- # kill -0 647234
00:28:29.906 17:20:45 -- common/autotest_common.sh@931 -- # uname
00:28:29.906 17:20:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:29.906 17:20:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 647234
00:28:29.906 17:20:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:28:29.906 17:20:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:28:29.906 17:20:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 647234'
killing process with pid 647234
00:28:29.907 17:20:45 -- common/autotest_common.sh@945 -- # kill 647234
00:28:29.907 17:20:45 -- common/autotest_common.sh@950 -- # wait 647234
00:28:30.164 17:20:46 -- host/failover.sh@110 -- # sync
00:28:30.164 17:20:46 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:30.423 17:20:46 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:28:30.423 17:20:46 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:30.423 17:20:46 -- host/failover.sh@116 -- # nvmftestfini
00:28:30.423 17:20:46 -- nvmf/common.sh@476 -- # nvmfcleanup
00:28:30.423 17:20:46 -- nvmf/common.sh@116 -- # sync
00:28:30.423 17:20:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:28:30.423 17:20:46 -- nvmf/common.sh@119 -- # set +e
00:28:30.423 17:20:46 -- nvmf/common.sh@120 -- # for i in {1..20}
00:28:30.423 17:20:46 -- nvmf/common.sh@121 -- #
modprobe -v -r nvme-tcp 00:28:30.423 rmmod nvme_tcp 00:28:30.423 rmmod nvme_fabrics 00:28:30.423 rmmod nvme_keyring 00:28:30.423 17:20:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:30.423 17:20:46 -- nvmf/common.sh@123 -- # set -e 00:28:30.423 17:20:46 -- nvmf/common.sh@124 -- # return 0 00:28:30.423 17:20:46 -- nvmf/common.sh@477 -- # '[' -n 644863 ']' 00:28:30.423 17:20:46 -- nvmf/common.sh@478 -- # killprocess 644863 00:28:30.423 17:20:46 -- common/autotest_common.sh@926 -- # '[' -z 644863 ']' 00:28:30.423 17:20:46 -- common/autotest_common.sh@930 -- # kill -0 644863 00:28:30.423 17:20:46 -- common/autotest_common.sh@931 -- # uname 00:28:30.423 17:20:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:30.423 17:20:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 644863 00:28:30.423 17:20:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:30.423 17:20:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:30.423 17:20:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 644863' 00:28:30.423 killing process with pid 644863 00:28:30.423 17:20:46 -- common/autotest_common.sh@945 -- # kill 644863 00:28:30.423 17:20:46 -- common/autotest_common.sh@950 -- # wait 644863 00:28:30.681 17:20:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:30.681 17:20:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:30.681 17:20:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:30.681 17:20:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:30.681 17:20:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:30.681 17:20:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.681 17:20:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.681 17:20:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.232 17:20:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:33.232 00:28:33.232 real 0m36.782s 00:28:33.232 user 2m9.684s 00:28:33.232 sys 0m6.090s 00:28:33.232 17:20:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.232 17:20:48 -- common/autotest_common.sh@10 -- # set +x 00:28:33.232 ************************************ 00:28:33.232 END TEST nvmf_failover 00:28:33.232 ************************************ 00:28:33.232 17:20:48 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:33.232 17:20:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:33.232 17:20:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:33.232 17:20:48 -- common/autotest_common.sh@10 -- # set +x 00:28:33.232 ************************************ 00:28:33.232 START TEST nvmf_discovery 00:28:33.232 ************************************ 00:28:33.232 17:20:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:33.232 * Looking for test storage... 
00:28:33.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.232 17:20:48 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.232 17:20:48 -- nvmf/common.sh@7 -- # uname -s 00:28:33.232 17:20:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.232 17:20:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.232 17:20:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.232 17:20:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.232 17:20:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.232 17:20:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.232 17:20:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.232 17:20:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.232 17:20:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.232 17:20:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.232 17:20:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.232 17:20:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:33.232 17:20:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.232 17:20:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.232 17:20:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.232 17:20:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.232 17:20:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.232 17:20:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.232 17:20:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.232 17:20:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.232 17:20:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.232 17:20:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.232 17:20:48 -- paths/export.sh@5 -- # export PATH 00:28:33.232 17:20:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.232 17:20:48 -- nvmf/common.sh@46 -- # : 0 00:28:33.232 17:20:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:33.232 17:20:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:33.232 17:20:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:33.232 17:20:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.232 17:20:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.232 17:20:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:33.232 17:20:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:33.232 17:20:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:33.232 17:20:48 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:33.232 17:20:48 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:33.232 17:20:48 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:33.232 17:20:48 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:33.232 17:20:48 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:33.232 17:20:48 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:33.232 17:20:48 -- host/discovery.sh@25 -- # nvmftestinit 00:28:33.232 17:20:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:33.232 17:20:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.232 17:20:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:33.232 17:20:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:33.232 17:20:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:33.232 17:20:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.232 17:20:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:33.232 17:20:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.232 17:20:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:33.232 17:20:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:33.232 17:20:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:33.232 17:20:48 -- common/autotest_common.sh@10 -- # set +x 00:28:35.136 17:20:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:35.136 17:20:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:35.136 17:20:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:35.136 17:20:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:35.136 17:20:50 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:35.136 17:20:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:35.136 17:20:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:35.136 17:20:50 -- nvmf/common.sh@294 -- # net_devs=() 00:28:35.136 17:20:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:35.136 17:20:50 -- nvmf/common.sh@295 -- # e810=() 00:28:35.136 17:20:50 -- nvmf/common.sh@295 -- # local -ga e810 00:28:35.136 17:20:50 -- nvmf/common.sh@296 -- # x722=() 00:28:35.136 17:20:50 -- nvmf/common.sh@296 -- # local -ga x722 00:28:35.136 17:20:50 -- nvmf/common.sh@297 -- # mlx=() 00:28:35.136 17:20:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:35.136 17:20:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:35.136 17:20:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:35.136 17:20:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:35.136 17:20:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:35.136 17:20:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:35.136 17:20:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:35.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:35.136 17:20:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:35.136 17:20:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:35.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:35.136 17:20:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:35.136 17:20:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:35.136 
17:20:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.136 17:20:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:35.136 17:20:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.136 17:20:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:35.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:35.136 17:20:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.136 17:20:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:35.136 17:20:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:35.136 17:20:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:35.136 17:20:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:35.136 17:20:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:35.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:35.136 17:20:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:35.136 17:20:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:35.136 17:20:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:35.136 17:20:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:35.136 17:20:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:35.137 17:20:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:35.137 17:20:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:35.137 17:20:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:35.137 17:20:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:35.137 17:20:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:35.137 17:20:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:35.137 17:20:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:35.137 17:20:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:35.137 17:20:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:35.137 17:20:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:35.137 17:20:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:35.137 17:20:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:35.137 17:20:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:35.137 17:20:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:35.137 17:20:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:35.137 17:20:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:35.137 17:20:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:35.137 17:20:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.137 17:20:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.137 17:20:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:35.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:35.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:28:35.137 00:28:35.137 --- 10.0.0.2 ping statistics --- 00:28:35.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.137 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:28:35.137 17:20:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:35.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:28:35.137 00:28:35.137 --- 10.0.0.1 ping statistics --- 00:28:35.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.137 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:28:35.137 17:20:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.137 17:20:51 -- nvmf/common.sh@410 -- # return 0 00:28:35.137 17:20:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:35.137 17:20:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.137 17:20:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:35.137 17:20:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:35.137 17:20:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.137 17:20:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:35.137 17:20:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:35.137 17:20:51 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:35.137 17:20:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:35.137 17:20:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:35.137 17:20:51 -- common/autotest_common.sh@10 -- # set +x 00:28:35.137 17:20:51 -- nvmf/common.sh@469 -- # nvmfpid=650692 00:28:35.137 17:20:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:35.137 17:20:51 -- nvmf/common.sh@470 -- # waitforlisten 650692 00:28:35.137 17:20:51 -- common/autotest_common.sh@819 -- # '[' -z 650692 ']' 00:28:35.137 17:20:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:35.137 17:20:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:35.137 17:20:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:35.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:35.137 17:20:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:35.137 17:20:51 -- common/autotest_common.sh@10 -- # set +x 00:28:35.137 [2024-07-20 17:20:51.093577] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:35.137 [2024-07-20 17:20:51.093654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:35.137 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.137 [2024-07-20 17:20:51.156520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.137 [2024-07-20 17:20:51.238707] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:35.137 [2024-07-20 17:20:51.238884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.137 [2024-07-20 17:20:51.238904] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.137 [2024-07-20 17:20:51.238918] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:35.137 [2024-07-20 17:20:51.238947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.069 17:20:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:36.069 17:20:52 -- common/autotest_common.sh@852 -- # return 0 00:28:36.069 17:20:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:36.069 17:20:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:36.069 17:20:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.069 17:20:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:36.069 17:20:52 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:36.069 17:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.069 17:20:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.069 [2024-07-20 17:20:52.083559] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:36.069 17:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.069 17:20:52 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:36.069 17:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.069 17:20:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.069 [2024-07-20 17:20:52.091719] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:36.069 17:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.069 17:20:52 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:36.069 17:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.069 17:20:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.069 null0 00:28:36.069 17:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.069 17:20:52 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:36.069 17:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.069 17:20:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.069 null1 00:28:36.069 17:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.069 17:20:52 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:36.069 17:20:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.069 17:20:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.069 17:20:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.069 17:20:52 -- host/discovery.sh@45 -- # hostpid=650846 00:28:36.069 17:20:52 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:36.069 17:20:52 -- host/discovery.sh@46 -- # waitforlisten 650846 /tmp/host.sock 00:28:36.069 17:20:52 -- common/autotest_common.sh@819 -- # '[' -z 650846 ']' 00:28:36.069 17:20:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:36.069 17:20:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:36.069 17:20:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:36.069 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:36.069 17:20:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:36.069 17:20:52 -- common/autotest_common.sh@10 -- # set +x 00:28:36.069 [2024-07-20 17:20:52.162315] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:36.070 [2024-07-20 17:20:52.162393] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650846 ] 00:28:36.070 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.070 [2024-07-20 17:20:52.224268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.327 [2024-07-20 17:20:52.312089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:36.327 [2024-07-20 17:20:52.312261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.259 17:20:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:37.259 17:20:53 -- common/autotest_common.sh@852 -- # return 0 00:28:37.259 17:20:53 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:37.259 17:20:53 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@72 -- # notify_id=0 00:28:37.259 17:20:53 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # sort 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # xargs 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:37.259 17:20:53 -- host/discovery.sh@79 -- # get_bdev_list 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # sort 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # xargs 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:37.259 17:20:53 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # sort 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # xargs 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:37.259 17:20:53 -- host/discovery.sh@83 -- # get_bdev_list 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # sort 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # xargs 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:37.259 17:20:53 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # sort 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # xargs 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:37.259 17:20:53 -- host/discovery.sh@87 -- # get_bdev_list 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # sort 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # xargs 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:37.259 17:20:53 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 [2024-07-20 17:20:53.343214] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- host/discovery.sh@59 -- # sort 00:28:37.259 17:20:53 
-- host/discovery.sh@59 -- # xargs 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.259 17:20:53 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:37.259 17:20:53 -- host/discovery.sh@93 -- # get_bdev_list 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.259 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:37.259 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # sort 00:28:37.259 17:20:53 -- host/discovery.sh@55 -- # xargs 00:28:37.259 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.515 17:20:53 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:37.515 17:20:53 -- host/discovery.sh@94 -- # get_notification_count 00:28:37.515 17:20:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:37.515 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.515 17:20:53 -- host/discovery.sh@74 -- # jq '. | length' 00:28:37.515 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.515 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.515 17:20:53 -- host/discovery.sh@74 -- # notification_count=0 00:28:37.515 17:20:53 -- host/discovery.sh@75 -- # notify_id=0 00:28:37.515 17:20:53 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:37.515 17:20:53 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:37.515 17:20:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.515 17:20:53 -- common/autotest_common.sh@10 -- # set +x 00:28:37.515 17:20:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.515 17:20:53 -- host/discovery.sh@100 -- # sleep 1 00:28:38.078 [2024-07-20 17:20:54.145068] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:38.078 [2024-07-20 17:20:54.145103] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:38.078 [2024-07-20 17:20:54.145132] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:38.078 [2024-07-20 17:20:54.231404] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:38.335 [2024-07-20 17:20:54.413557] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:38.335 [2024-07-20 17:20:54.413584] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:38.335 17:20:54 -- host/discovery.sh@101 -- # get_subsystem_names 00:28:38.335 17:20:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:38.335 17:20:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:38.335 17:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.335 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:28:38.335 17:20:54 -- host/discovery.sh@59 -- # sort 00:28:38.335 17:20:54 -- host/discovery.sh@59 -- # xargs 00:28:38.335 17:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.592 17:20:54 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.592 17:20:54 -- host/discovery.sh@102 -- # get_bdev_list 00:28:38.592 17:20:54 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.592 17:20:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:38.592 17:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.592 17:20:54 -- host/discovery.sh@55 -- # sort 00:28:38.592 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:28:38.592 17:20:54 -- host/discovery.sh@55 -- # xargs 00:28:38.592 17:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.592 17:20:54 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:38.592 17:20:54 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:38.592 17:20:54 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:38.592 17:20:54 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:38.592 17:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.592 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:28:38.592 17:20:54 -- host/discovery.sh@63 -- # sort -n 00:28:38.592 17:20:54 -- host/discovery.sh@63 -- # xargs 00:28:38.592 17:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.592 17:20:54 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:38.592 17:20:54 -- host/discovery.sh@104 -- # get_notification_count 00:28:38.592 17:20:54 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:38.592 17:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.592 17:20:54 -- host/discovery.sh@74 -- # jq '. | length' 00:28:38.592 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:28:38.592 17:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.592 17:20:54 -- host/discovery.sh@74 -- # notification_count=1 00:28:38.592 17:20:54 -- host/discovery.sh@75 -- # notify_id=1 00:28:38.592 17:20:54 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:38.592 17:20:54 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:38.592 17:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.592 17:20:54 -- common/autotest_common.sh@10 -- # set +x 00:28:38.592 17:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.592 17:20:54 -- host/discovery.sh@109 -- # sleep 1 00:28:39.523 17:20:55 -- host/discovery.sh@110 -- # get_bdev_list 00:28:39.523 17:20:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.523 17:20:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:39.523 17:20:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.523 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:28:39.523 17:20:55 -- host/discovery.sh@55 -- # sort 00:28:39.523 17:20:55 -- host/discovery.sh@55 -- # xargs 00:28:39.782 17:20:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.782 17:20:55 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:39.782 17:20:55 -- host/discovery.sh@111 -- # get_notification_count 00:28:39.782 17:20:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:39.782 17:20:55 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:39.782 17:20:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.782 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:28:39.782 17:20:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.782 17:20:55 -- host/discovery.sh@74 -- # notification_count=1 00:28:39.782 17:20:55 -- host/discovery.sh@75 -- # notify_id=2 00:28:39.782 17:20:55 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:39.782 17:20:55 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:39.782 17:20:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.782 17:20:55 -- common/autotest_common.sh@10 -- # set +x 00:28:39.782 [2024-07-20 17:20:55.754388] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:39.782 [2024-07-20 17:20:55.755589] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:39.782 [2024-07-20 17:20:55.755635] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:39.782 17:20:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.782 17:20:55 -- host/discovery.sh@117 -- # sleep 1 00:28:39.782 [2024-07-20 17:20:55.843833] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:39.783 [2024-07-20 17:20:55.908678] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:39.783 [2024-07-20 17:20:55.908708] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:39.783 [2024-07-20 17:20:55.908720] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:40.721 17:20:56 -- host/discovery.sh@118 -- # get_subsystem_names 00:28:40.721 17:20:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:40.721 17:20:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.721 17:20:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:40.721 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:28:40.721 17:20:56 -- host/discovery.sh@59 -- # sort 00:28:40.721 17:20:56 -- host/discovery.sh@59 -- # xargs 00:28:40.721 17:20:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.721 17:20:56 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.721 17:20:56 -- host/discovery.sh@119 -- # get_bdev_list 00:28:40.721 17:20:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.721 17:20:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.721 17:20:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:40.721 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:28:40.721 17:20:56 -- host/discovery.sh@55 -- # sort 00:28:40.721 17:20:56 -- host/discovery.sh@55 -- # xargs 00:28:40.721 17:20:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.721 17:20:56 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:40.721 17:20:56 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:40.721 17:20:56 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:40.721 17:20:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.721 17:20:56 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:28:40.721 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:28:40.721 17:20:56 -- host/discovery.sh@63 -- # sort -n 00:28:40.721 17:20:56 -- host/discovery.sh@63 -- # xargs 00:28:40.721 17:20:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.980 17:20:56 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:40.980 17:20:56 -- host/discovery.sh@121 -- # get_notification_count 00:28:40.980 17:20:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:40.980 17:20:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.980 17:20:56 -- host/discovery.sh@74 -- # jq '. | length' 00:28:40.980 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:28:40.980 17:20:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.980 17:20:56 -- host/discovery.sh@74 -- # notification_count=0 00:28:40.980 17:20:56 -- host/discovery.sh@75 -- # notify_id=2 00:28:40.980 17:20:56 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:40.980 17:20:56 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:40.980 17:20:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.980 17:20:56 -- common/autotest_common.sh@10 -- # set +x 00:28:40.980 [2024-07-20 17:20:56.918821] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:40.980 [2024-07-20 17:20:56.918876] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:40.980 17:20:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.980 17:20:56 -- host/discovery.sh@127 -- # sleep 1 00:28:40.980 [2024-07-20 17:20:56.927329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.980 [2024-07-20 17:20:56.927365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.980 [2024-07-20 17:20:56.927384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.980 [2024-07-20 17:20:56.927401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.980 [2024-07-20 17:20:56.927416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.980 [2024-07-20 17:20:56.927445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.980 [2024-07-20 17:20:56.927459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.980 [2024-07-20 17:20:56.927473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.980 [2024-07-20 17:20:56.927496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773b60 is same with the state(5) to be set 00:28:40.980 [2024-07-20 17:20:56.937333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773b60 (9): Bad file descriptor 00:28:40.980 [2024-07-20 17:20:56.947378] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:40.980 [2024-07-20 17:20:56.947733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.948045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.948081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x773b60 with addr=10.0.0.2, port=4420 00:28:40.980 [2024-07-20 17:20:56.948113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773b60 is same with the state(5) to be set 00:28:40.980 [2024-07-20 17:20:56.948139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773b60 (9): Bad file descriptor 00:28:40.980 [2024-07-20 17:20:56.948178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:40.980 [2024-07-20 17:20:56.948199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:40.980 [2024-07-20 17:20:56.948217] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:40.980 [2024-07-20 17:20:56.948254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.980 [2024-07-20 17:20:56.957457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:40.980 [2024-07-20 17:20:56.957779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.958000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.958026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x773b60 with addr=10.0.0.2, port=4420 00:28:40.980 [2024-07-20 17:20:56.958042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773b60 is same with the state(5) to be set 00:28:40.980 [2024-07-20 17:20:56.958063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773b60 (9): Bad file descriptor 00:28:40.980 [2024-07-20 17:20:56.958083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:40.980 [2024-07-20 17:20:56.958097] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:40.980 [2024-07-20 17:20:56.958109] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:40.980 [2024-07-20 17:20:56.958128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.980 [2024-07-20 17:20:56.967545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:40.980 [2024-07-20 17:20:56.967847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.968081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.968107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x773b60 with addr=10.0.0.2, port=4420 00:28:40.980 [2024-07-20 17:20:56.968123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773b60 is same with the state(5) to be set 00:28:40.980 [2024-07-20 17:20:56.968145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773b60 (9): Bad file descriptor 00:28:40.980 [2024-07-20 17:20:56.968165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:40.980 [2024-07-20 17:20:56.968193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:40.980 [2024-07-20 17:20:56.968214] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:40.980 [2024-07-20 17:20:56.968237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.980 [2024-07-20 17:20:56.977625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:40.980 [2024-07-20 17:20:56.977913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.978211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.978239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x773b60 with addr=10.0.0.2, port=4420 00:28:40.980 [2024-07-20 17:20:56.978256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773b60 is same with the state(5) to be set 00:28:40.980 [2024-07-20 17:20:56.978281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773b60 (9): Bad file descriptor 00:28:40.980 [2024-07-20 17:20:56.978316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:40.980 [2024-07-20 17:20:56.978336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:40.980 [2024-07-20 17:20:56.978366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:40.980 [2024-07-20 17:20:56.978385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:40.980 [2024-07-20 17:20:56.987701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:40.980 [2024-07-20 17:20:56.987991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.988235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.980 [2024-07-20 17:20:56.988260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x773b60 with addr=10.0.0.2, port=4420 00:28:40.980 [2024-07-20 17:20:56.988276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773b60 is same with the state(5) to be set 00:28:40.980 [2024-07-20 17:20:56.988298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773b60 (9): Bad file descriptor 00:28:40.980 [2024-07-20 17:20:56.988317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:40.981 [2024-07-20 17:20:56.988346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:40.981 [2024-07-20 17:20:56.988361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:40.981 [2024-07-20 17:20:56.988398] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:40.981 [2024-07-20 17:20:56.997778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:40.981 [2024-07-20 17:20:56.998100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.981 [2024-07-20 17:20:56.998360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.981 [2024-07-20 17:20:56.998387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x773b60 with addr=10.0.0.2, port=4420 00:28:40.981 [2024-07-20 17:20:56.998405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x773b60 is same with the state(5) to be set 00:28:40.981 [2024-07-20 17:20:56.998429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x773b60 (9): Bad file descriptor 00:28:40.981 [2024-07-20 17:20:56.998464] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:40.981 [2024-07-20 17:20:56.998483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:40.981 [2024-07-20 17:20:56.998498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:40.981 [2024-07-20 17:20:56.998533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
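The connect() failures in the bursts above are expected rather than a fault: errno 111 is ECONNREFUSED, and the test has just removed the 10.0.0.2:4420 listener, so every reconnect attempt made by the host's controller-reset path is refused until the discovery poller drops that path (logged next as 4420 "not found" and 4421 "found again"). As a minimal sketch of the wait the test effectively performs afterwards — assuming the same rpc_cmd wrapper and /tmp/host.sock host application from this run, with the 4421 literal being just this run's surviving listener — one could poll the remaining paths with the same RPC and jq filter the get_subsystem_paths helper uses above:

# Poll until the only path left on controller nvme0 is the surviving
# listener (4421 in this run). Mirrors host/discovery.sh@63's pipeline.
while true; do
  paths=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
  [[ $paths == 4421 ]] && break
  sleep 1
done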
00:28:40.981 [2024-07-20 17:20:57.005482] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:40.981 [2024-07-20 17:20:57.005516] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:41.915 17:20:57 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:41.915 17:20:57 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:41.915 17:20:57 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:41.915 17:20:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.915 17:20:57 -- common/autotest_common.sh@10 -- # set +x 00:28:41.915 17:20:57 -- host/discovery.sh@59 -- # sort 00:28:41.915 17:20:57 -- host/discovery.sh@59 -- # xargs 00:28:41.915 17:20:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.915 17:20:57 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.915 17:20:57 -- host/discovery.sh@129 -- # get_bdev_list 00:28:41.915 17:20:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.915 17:20:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.915 17:20:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:41.915 17:20:57 -- common/autotest_common.sh@10 -- # set +x 00:28:41.915 17:20:57 -- host/discovery.sh@55 -- # sort 00:28:41.915 17:20:57 -- host/discovery.sh@55 -- # xargs 00:28:41.915 17:20:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.915 17:20:58 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:41.915 17:20:58 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:41.915 17:20:58 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:41.915 17:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.915 17:20:58 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:41.915 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:28:41.915 17:20:58 -- host/discovery.sh@63 -- # sort -n 00:28:41.915 17:20:58 -- host/discovery.sh@63 -- # xargs 00:28:41.915 17:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:41.915 17:20:58 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:41.915 17:20:58 -- host/discovery.sh@131 -- # get_notification_count 00:28:41.915 17:20:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:41.915 17:20:58 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:41.915 17:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.915 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:28:41.915 17:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.173 17:20:58 -- host/discovery.sh@74 -- # notification_count=0 00:28:42.173 17:20:58 -- host/discovery.sh@75 -- # notify_id=2 00:28:42.173 17:20:58 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:42.173 17:20:58 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:42.173 17:20:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.173 17:20:58 -- common/autotest_common.sh@10 -- # set +x 00:28:42.173 17:20:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.173 17:20:58 -- host/discovery.sh@135 -- # sleep 1 00:28:43.104 17:20:59 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:43.104 17:20:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:43.104 17:20:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.104 17:20:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:43.104 17:20:59 -- common/autotest_common.sh@10 -- # set +x 00:28:43.104 17:20:59 -- host/discovery.sh@59 -- # sort 00:28:43.104 17:20:59 -- host/discovery.sh@59 -- # xargs 00:28:43.104 17:20:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.104 17:20:59 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:43.104 17:20:59 -- host/discovery.sh@137 -- # get_bdev_list 00:28:43.104 17:20:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.104 17:20:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.104 17:20:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:43.104 17:20:59 -- common/autotest_common.sh@10 -- # set +x 00:28:43.104 17:20:59 -- host/discovery.sh@55 -- # sort 00:28:43.104 17:20:59 -- host/discovery.sh@55 -- # xargs 00:28:43.104 17:20:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.104 17:20:59 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:43.104 17:20:59 -- host/discovery.sh@138 -- # get_notification_count 00:28:43.104 17:20:59 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:43.104 17:20:59 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:43.104 17:20:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.104 17:20:59 -- common/autotest_common.sh@10 -- # set +x 00:28:43.104 17:20:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.104 17:20:59 -- host/discovery.sh@74 -- # notification_count=2 00:28:43.104 17:20:59 -- host/discovery.sh@75 -- # notify_id=4 00:28:43.104 17:20:59 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:43.104 17:20:59 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:43.104 17:20:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.104 17:20:59 -- common/autotest_common.sh@10 -- # set +x 00:28:44.474 [2024-07-20 17:21:00.292059] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:44.474 [2024-07-20 17:21:00.292095] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:44.474 [2024-07-20 17:21:00.292133] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:44.474 [2024-07-20 17:21:00.378430] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:44.474 [2024-07-20 17:21:00.443786] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:44.474 [2024-07-20 17:21:00.443846] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:44.474 17:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.474 17:21:00 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.474 17:21:00 -- common/autotest_common.sh@640 -- # local es=0 00:28:44.474 17:21:00 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.474 17:21:00 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:44.474 17:21:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.474 17:21:00 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:44.474 17:21:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.474 17:21:00 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.474 17:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.474 17:21:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.474 request: 00:28:44.474 { 00:28:44.474 "name": "nvme", 00:28:44.474 "trtype": "tcp", 00:28:44.474 "traddr": "10.0.0.2", 00:28:44.474 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:44.474 "adrfam": "ipv4", 00:28:44.474 "trsvcid": "8009", 00:28:44.474 "wait_for_attach": true, 00:28:44.474 "method": "bdev_nvme_start_discovery", 00:28:44.474 "req_id": 1 00:28:44.474 } 00:28:44.474 Got JSON-RPC error response 00:28:44.474 response: 00:28:44.474 { 00:28:44.474 "code": -17, 00:28:44.474 "message": "File exists" 00:28:44.474 } 00:28:44.474 17:21:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:44.474 17:21:00 -- common/autotest_common.sh@643 -- # es=1 00:28:44.474 17:21:00 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:44.474 17:21:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:44.474 17:21:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:44.474 17:21:00 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:44.474 17:21:00 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:44.474 17:21:00 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:44.474 17:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.474 17:21:00 -- host/discovery.sh@67 -- # sort 00:28:44.474 17:21:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.474 17:21:00 -- host/discovery.sh@67 -- # xargs 00:28:44.474 17:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.474 17:21:00 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:44.474 17:21:00 -- host/discovery.sh@147 -- # get_bdev_list 00:28:44.474 17:21:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.474 17:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.474 17:21:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.474 17:21:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:44.474 17:21:00 -- host/discovery.sh@55 -- # sort 00:28:44.474 17:21:00 -- host/discovery.sh@55 -- # xargs 00:28:44.474 17:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.474 17:21:00 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:44.474 17:21:00 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.474 17:21:00 -- common/autotest_common.sh@640 -- # local es=0 00:28:44.474 17:21:00 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.474 17:21:00 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:44.474 17:21:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.474 17:21:00 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:44.474 17:21:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.474 17:21:00 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:44.474 17:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.474 17:21:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.474 request: 00:28:44.474 { 00:28:44.474 "name": "nvme_second", 00:28:44.474 "trtype": "tcp", 00:28:44.474 "traddr": "10.0.0.2", 00:28:44.474 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:44.474 "adrfam": "ipv4", 00:28:44.474 "trsvcid": "8009", 00:28:44.474 "wait_for_attach": true, 00:28:44.474 "method": "bdev_nvme_start_discovery", 00:28:44.474 "req_id": 1 00:28:44.474 } 00:28:44.474 Got JSON-RPC error response 00:28:44.474 response: 00:28:44.474 { 00:28:44.474 "code": -17, 00:28:44.474 "message": "File exists" 00:28:44.474 } 00:28:44.474 17:21:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:44.474 17:21:00 -- common/autotest_common.sh@643 -- # es=1 00:28:44.474 17:21:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:44.474 17:21:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:44.474 17:21:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:44.474 
17:21:00 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:44.474 17:21:00 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:44.474 17:21:00 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:44.474 17:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.474 17:21:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.474 17:21:00 -- host/discovery.sh@67 -- # sort 00:28:44.474 17:21:00 -- host/discovery.sh@67 -- # xargs 00:28:44.474 17:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.474 17:21:00 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:44.474 17:21:00 -- host/discovery.sh@153 -- # get_bdev_list 00:28:44.474 17:21:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.474 17:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.474 17:21:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:44.474 17:21:00 -- common/autotest_common.sh@10 -- # set +x 00:28:44.474 17:21:00 -- host/discovery.sh@55 -- # sort 00:28:44.474 17:21:00 -- host/discovery.sh@55 -- # xargs 00:28:44.474 17:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.474 17:21:00 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:44.474 17:21:00 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:44.474 17:21:00 -- common/autotest_common.sh@640 -- # local es=0 00:28:44.474 17:21:00 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:44.474 17:21:00 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:44.474 17:21:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.474 17:21:00 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:44.474 17:21:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:44.474 17:21:00 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:44.474 17:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.474 17:21:00 -- common/autotest_common.sh@10 -- # set +x 00:28:45.848 [2024-07-20 17:21:01.639343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.848 [2024-07-20 17:21:01.639629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.848 [2024-07-20 17:21:01.639672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7700e0 with addr=10.0.0.2, port=8010 00:28:45.848 [2024-07-20 17:21:01.639701] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:45.848 [2024-07-20 17:21:01.639717] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:45.848 [2024-07-20 17:21:01.639731] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:46.781 [2024-07-20 17:21:02.641736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.781 [2024-07-20 17:21:02.642037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.781 [2024-07-20 17:21:02.642065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x7700e0 with addr=10.0.0.2, port=8010 00:28:46.781 [2024-07-20 17:21:02.642084] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:46.781 [2024-07-20 17:21:02.642114] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:46.781 [2024-07-20 17:21:02.642127] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:47.745 [2024-07-20 17:21:03.643882] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:47.745 request: 00:28:47.745 { 00:28:47.745 "name": "nvme_second", 00:28:47.745 "trtype": "tcp", 00:28:47.745 "traddr": "10.0.0.2", 00:28:47.745 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:47.745 "adrfam": "ipv4", 00:28:47.745 "trsvcid": "8010", 00:28:47.745 "attach_timeout_ms": 3000, 00:28:47.745 "method": "bdev_nvme_start_discovery", 00:28:47.745 "req_id": 1 00:28:47.745 } 00:28:47.745 Got JSON-RPC error response 00:28:47.745 response: 00:28:47.745 { 00:28:47.745 "code": -110, 00:28:47.745 "message": "Connection timed out" 00:28:47.745 } 00:28:47.745 17:21:03 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:47.745 17:21:03 -- common/autotest_common.sh@643 -- # es=1 00:28:47.745 17:21:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:47.745 17:21:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:47.745 17:21:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:47.745 17:21:03 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:47.745 17:21:03 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:47.745 17:21:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.745 17:21:03 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:47.745 17:21:03 -- common/autotest_common.sh@10 -- # set +x 00:28:47.745 17:21:03 -- host/discovery.sh@67 -- # sort 00:28:47.745 17:21:03 -- host/discovery.sh@67 -- # xargs 00:28:47.745 17:21:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.745 17:21:03 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:47.745 17:21:03 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:47.745 17:21:03 -- host/discovery.sh@162 -- # kill 650846 00:28:47.745 17:21:03 -- host/discovery.sh@163 -- # nvmftestfini 00:28:47.745 17:21:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:47.745 17:21:03 -- nvmf/common.sh@116 -- # sync 00:28:47.745 17:21:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:47.745 17:21:03 -- nvmf/common.sh@119 -- # set +e 00:28:47.745 17:21:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:47.745 17:21:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:47.745 rmmod nvme_tcp 00:28:47.745 rmmod nvme_fabrics 00:28:47.745 rmmod nvme_keyring 00:28:47.745 17:21:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:47.745 17:21:03 -- nvmf/common.sh@123 -- # set -e 00:28:47.745 17:21:03 -- nvmf/common.sh@124 -- # return 0 00:28:47.745 17:21:03 -- nvmf/common.sh@477 -- # '[' -n 650692 ']' 00:28:47.745 17:21:03 -- nvmf/common.sh@478 -- # killprocess 650692 00:28:47.745 17:21:03 -- common/autotest_common.sh@926 -- # '[' -z 650692 ']' 00:28:47.745 17:21:03 -- common/autotest_common.sh@930 -- # kill -0 650692 00:28:47.745 17:21:03 -- common/autotest_common.sh@931 -- # uname 00:28:47.745 17:21:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:47.745 17:21:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 650692 00:28:47.745 
17:21:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:47.745 17:21:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:47.745 17:21:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 650692' 00:28:47.745 killing process with pid 650692 00:28:47.745 17:21:03 -- common/autotest_common.sh@945 -- # kill 650692 00:28:47.745 17:21:03 -- common/autotest_common.sh@950 -- # wait 650692 00:28:48.002 17:21:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:48.002 17:21:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:48.002 17:21:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:48.002 17:21:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:48.002 17:21:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:48.002 17:21:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.002 17:21:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.002 17:21:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.898 17:21:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:49.898 00:28:49.898 real 0m17.227s 00:28:49.898 user 0m26.591s 00:28:49.898 sys 0m2.897s 00:28:49.898 17:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.898 17:21:06 -- common/autotest_common.sh@10 -- # set +x 00:28:49.898 ************************************ 00:28:49.898 END TEST nvmf_discovery 00:28:49.898 ************************************ 00:28:49.898 17:21:06 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:49.898 17:21:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:49.898 17:21:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:49.898 17:21:06 -- common/autotest_common.sh@10 -- # set +x 00:28:49.898 ************************************ 00:28:49.898 START TEST nvmf_discovery_remove_ifc 00:28:49.898 ************************************ 00:28:49.898 17:21:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:50.156 * Looking for test storage... 
00:28:50.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:50.156 17:21:06 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.156 17:21:06 -- nvmf/common.sh@7 -- # uname -s 00:28:50.156 17:21:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.156 17:21:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.156 17:21:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.156 17:21:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.156 17:21:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.156 17:21:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.156 17:21:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.156 17:21:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.156 17:21:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.157 17:21:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.157 17:21:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.157 17:21:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:50.157 17:21:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.157 17:21:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.157 17:21:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.157 17:21:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.157 17:21:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.157 17:21:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.157 17:21:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.157 17:21:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.157 17:21:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.157 17:21:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.157 17:21:06 -- paths/export.sh@5 -- # export PATH 00:28:50.157 17:21:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.157 17:21:06 -- nvmf/common.sh@46 -- # : 0 00:28:50.157 17:21:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:50.157 17:21:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:50.157 17:21:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:50.157 17:21:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.157 17:21:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.157 17:21:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:50.157 17:21:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:50.157 17:21:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:50.157 17:21:06 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:50.157 17:21:06 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:50.157 17:21:06 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:50.157 17:21:06 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:50.157 17:21:06 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:50.157 17:21:06 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:50.157 17:21:06 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:50.157 17:21:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:50.157 17:21:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.157 17:21:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:50.157 17:21:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:50.157 17:21:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:50.157 17:21:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.157 17:21:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.157 17:21:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.157 17:21:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:50.157 17:21:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:50.157 17:21:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:50.157 17:21:06 -- common/autotest_common.sh@10 -- # set +x 00:28:52.057 17:21:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:52.057 17:21:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:52.057 17:21:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:52.057 17:21:07 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:52.057 17:21:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:52.057 17:21:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:52.057 17:21:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:52.057 17:21:07 -- nvmf/common.sh@294 -- # net_devs=() 00:28:52.057 17:21:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:52.057 17:21:07 -- nvmf/common.sh@295 -- # e810=() 00:28:52.057 17:21:07 -- nvmf/common.sh@295 -- # local -ga e810 00:28:52.057 17:21:07 -- nvmf/common.sh@296 -- # x722=() 00:28:52.057 17:21:07 -- nvmf/common.sh@296 -- # local -ga x722 00:28:52.057 17:21:07 -- nvmf/common.sh@297 -- # mlx=() 00:28:52.057 17:21:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:52.057 17:21:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:52.057 17:21:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:52.057 17:21:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:52.057 17:21:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:52.057 17:21:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:52.057 17:21:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:52.057 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:52.057 17:21:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:52.057 17:21:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:52.057 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:52.057 17:21:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:52.057 17:21:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:52.057 17:21:07 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:52.057 17:21:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.057 17:21:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:52.057 17:21:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.057 17:21:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:52.057 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:52.057 17:21:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.057 17:21:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:52.057 17:21:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:52.057 17:21:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:52.057 17:21:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:52.057 17:21:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:52.057 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:52.057 17:21:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:52.057 17:21:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:52.057 17:21:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:52.057 17:21:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:52.057 17:21:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:52.057 17:21:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:52.057 17:21:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:52.057 17:21:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:52.057 17:21:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:52.057 17:21:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:52.057 17:21:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:52.057 17:21:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:52.057 17:21:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:52.057 17:21:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:52.057 17:21:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:52.057 17:21:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:52.057 17:21:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:52.057 17:21:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:52.057 17:21:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:52.057 17:21:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:52.057 17:21:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:52.057 17:21:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:52.057 17:21:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:52.057 17:21:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:52.057 17:21:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:52.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:52.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:28:52.057 00:28:52.057 --- 10.0.0.2 ping statistics --- 00:28:52.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.057 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:28:52.057 17:21:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:52.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:52.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:28:52.057 00:28:52.057 --- 10.0.0.1 ping statistics --- 00:28:52.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:52.057 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:28:52.057 17:21:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:52.057 17:21:08 -- nvmf/common.sh@410 -- # return 0 00:28:52.057 17:21:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:52.057 17:21:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:52.057 17:21:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:52.057 17:21:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:52.057 17:21:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:52.057 17:21:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:52.057 17:21:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:52.057 17:21:08 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:52.057 17:21:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:52.057 17:21:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:52.057 17:21:08 -- common/autotest_common.sh@10 -- # set +x 00:28:52.057 17:21:08 -- nvmf/common.sh@469 -- # nvmfpid=654733 00:28:52.057 17:21:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:52.057 17:21:08 -- nvmf/common.sh@470 -- # waitforlisten 654733 00:28:52.057 17:21:08 -- common/autotest_common.sh@819 -- # '[' -z 654733 ']' 00:28:52.057 17:21:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.057 17:21:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:52.057 17:21:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.057 17:21:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:52.057 17:21:08 -- common/autotest_common.sh@10 -- # set +x 00:28:52.057 [2024-07-20 17:21:08.128302] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:52.057 [2024-07-20 17:21:08.128374] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.057 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.057 [2024-07-20 17:21:08.199629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.315 [2024-07-20 17:21:08.292939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:52.315 [2024-07-20 17:21:08.293118] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:52.315 [2024-07-20 17:21:08.293138] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.315 [2024-07-20 17:21:08.293153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.315 [2024-07-20 17:21:08.293184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.250 17:21:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:53.250 17:21:09 -- common/autotest_common.sh@852 -- # return 0 00:28:53.250 17:21:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:53.250 17:21:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:53.250 17:21:09 -- common/autotest_common.sh@10 -- # set +x 00:28:53.250 17:21:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.250 17:21:09 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:53.250 17:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.250 17:21:09 -- common/autotest_common.sh@10 -- # set +x 00:28:53.250 [2024-07-20 17:21:09.141406] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:53.250 [2024-07-20 17:21:09.149570] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:53.250 null0 00:28:53.250 [2024-07-20 17:21:09.181539] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.250 17:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.250 17:21:09 -- host/discovery_remove_ifc.sh@59 -- # hostpid=655067 00:28:53.250 17:21:09 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 655067 /tmp/host.sock 00:28:53.250 17:21:09 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:53.250 17:21:09 -- common/autotest_common.sh@819 -- # '[' -z 655067 ']' 00:28:53.250 17:21:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:53.250 17:21:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:53.250 17:21:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:53.250 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:53.250 17:21:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:53.250 17:21:09 -- common/autotest_common.sh@10 -- # set +x 00:28:53.250 [2024-07-20 17:21:09.243829] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:53.250 [2024-07-20 17:21:09.243898] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid655067 ] 00:28:53.250 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.250 [2024-07-20 17:21:09.306346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.250 [2024-07-20 17:21:09.394556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:53.250 [2024-07-20 17:21:09.394747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.509 17:21:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:53.509 17:21:09 -- common/autotest_common.sh@852 -- # return 0 00:28:53.509 17:21:09 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:53.509 17:21:09 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:53.509 17:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.509 17:21:09 -- common/autotest_common.sh@10 -- # set +x 00:28:53.509 17:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.509 17:21:09 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:53.509 17:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.510 17:21:09 -- common/autotest_common.sh@10 -- # set +x 00:28:53.510 17:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.510 17:21:09 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:53.510 17:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.510 17:21:09 -- common/autotest_common.sh@10 -- # set +x 00:28:54.445 [2024-07-20 17:21:10.570925] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:54.445 [2024-07-20 17:21:10.570970] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:54.445 [2024-07-20 17:21:10.571003] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:54.703 [2024-07-20 17:21:10.697400] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:54.962 [2024-07-20 17:21:10.921898] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:54.962 [2024-07-20 17:21:10.921951] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:54.962 [2024-07-20 17:21:10.921986] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:54.962 [2024-07-20 17:21:10.922012] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:54.962 [2024-07-20 17:21:10.922049] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:54.962 17:21:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.962 17:21:10 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:54.962 17:21:10 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:54.962 17:21:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.962 17:21:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:54.962 17:21:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.962 17:21:10 -- common/autotest_common.sh@10 -- # set +x 00:28:54.962 17:21:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:54.962 17:21:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:54.962 [2024-07-20 17:21:10.928470] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x119e3f0 was disconnected and freed. delete nvme_qpair. 00:28:54.962 17:21:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.962 17:21:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:54.962 17:21:10 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:54.962 17:21:10 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:54.962 17:21:11 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:54.962 17:21:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:54.962 17:21:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.962 17:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.962 17:21:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:54.962 17:21:11 -- common/autotest_common.sh@10 -- # set +x 00:28:54.962 17:21:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:54.962 17:21:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:54.962 17:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.962 17:21:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:54.962 17:21:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:56.337 17:21:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:56.337 17:21:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:56.337 17:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.337 17:21:12 -- common/autotest_common.sh@10 -- # set +x 00:28:56.337 17:21:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:56.337 17:21:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:56.337 17:21:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:56.337 17:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.337 17:21:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:56.337 17:21:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:57.290 17:21:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:57.290 17:21:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.290 17:21:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:57.290 17:21:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:57.290 17:21:13 -- common/autotest_common.sh@10 -- # set +x 00:28:57.290 17:21:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:57.290 17:21:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:57.290 17:21:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:57.290 17:21:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:57.290 17:21:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:58.220 17:21:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:58.220 17:21:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:28:58.221 17:21:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.221 17:21:14 -- common/autotest_common.sh@10 -- # set +x 00:28:58.221 17:21:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:58.221 17:21:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:58.221 17:21:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:58.221 17:21:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.221 17:21:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:58.221 17:21:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:59.150 17:21:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:59.150 17:21:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.150 17:21:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:59.150 17:21:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.150 17:21:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:59.150 17:21:15 -- common/autotest_common.sh@10 -- # set +x 00:28:59.150 17:21:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:59.150 17:21:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.150 17:21:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:59.150 17:21:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:00.083 17:21:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:00.083 17:21:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.083 17:21:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:00.083 17:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.083 17:21:16 -- common/autotest_common.sh@10 -- # set +x 00:29:00.083 17:21:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:00.083 17:21:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:00.341 17:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.341 17:21:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:00.341 17:21:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:00.341 [2024-07-20 17:21:16.362750] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:00.341 [2024-07-20 17:21:16.362841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.341 [2024-07-20 17:21:16.362863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.341 [2024-07-20 17:21:16.362880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.341 [2024-07-20 17:21:16.362894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.341 [2024-07-20 17:21:16.362912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.341 [2024-07-20 17:21:16.362925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.341 [2024-07-20 17:21:16.362938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:29:00.341 [2024-07-20 17:21:16.362950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.341 [2024-07-20 17:21:16.362964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.341 [2024-07-20 17:21:16.362976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.341 [2024-07-20 17:21:16.362989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1164850 is same with the state(5) to be set 00:29:00.341 [2024-07-20 17:21:16.372768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1164850 (9): Bad file descriptor 00:29:00.341 [2024-07-20 17:21:16.382844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:01.272 17:21:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:01.272 17:21:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:01.272 17:21:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:01.272 17:21:17 -- common/autotest_common.sh@10 -- # set +x 00:29:01.272 17:21:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:01.272 17:21:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:01.272 17:21:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:01.272 [2024-07-20 17:21:17.417824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:02.641 [2024-07-20 17:21:18.441832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:02.641 [2024-07-20 17:21:18.441885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1164850 with addr=10.0.0.2, port=4420 00:29:02.641 [2024-07-20 17:21:18.441913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1164850 is same with the state(5) to be set 00:29:02.641 [2024-07-20 17:21:18.441951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:02.641 [2024-07-20 17:21:18.441971] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:02.641 [2024-07-20 17:21:18.441986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:02.641 [2024-07-20 17:21:18.442004] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:29:02.641 [2024-07-20 17:21:18.442437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1164850 (9): Bad file descriptor 00:29:02.641 [2024-07-20 17:21:18.442482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
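Note: the repeating bdev_get_bdevs/jq/sort/xargs entries above are a polling loop, not noise. After the target-side address is deleted and the link brought down, the test waits for the host to give up on nvme0n1 (ctrlr-loss-timeout is only 2 s). A minimal sketch of that loop as traced, using the rpc_cmd wrapper and helper names visible in discovery_remove_ifc.sh:

  get_bdev_list() {
    # Names of all bdevs currently visible to the host app, one line, sorted.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
    # Re-check once per second until the list matches the expectation:
    # "nvme0n1" while attached, "" once the controller has been dropped.
    while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }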
00:29:02.641 [2024-07-20 17:21:18.442531] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:02.641 [2024-07-20 17:21:18.442573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.641 [2024-07-20 17:21:18.442598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.641 [2024-07-20 17:21:18.442620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.641 [2024-07-20 17:21:18.442636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.641 [2024-07-20 17:21:18.442652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.641 [2024-07-20 17:21:18.442667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.641 [2024-07-20 17:21:18.442682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.641 [2024-07-20 17:21:18.442697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.641 [2024-07-20 17:21:18.442713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.641 [2024-07-20 17:21:18.442727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.641 [2024-07-20 17:21:18.442743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:29:02.641 [2024-07-20 17:21:18.442951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1164c60 (9): Bad file descriptor 00:29:02.641 [2024-07-20 17:21:18.443973] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:02.641 [2024-07-20 17:21:18.443995] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:02.642 17:21:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:02.642 17:21:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:02.642 17:21:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:03.574 17:21:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.574 17:21:19 -- common/autotest_common.sh@10 -- # set +x 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:03.574 17:21:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:03.574 17:21:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.574 17:21:19 -- common/autotest_common.sh@10 -- # set +x 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:03.574 17:21:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:03.574 17:21:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:04.507 [2024-07-20 17:21:20.456795] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:04.507 [2024-07-20 17:21:20.456832] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:04.507 [2024-07-20 17:21:20.456855] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:04.507 17:21:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:04.507 [2024-07-20 17:21:20.584320] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:04.507 17:21:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:04.507 17:21:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:04.507 17:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:04.507 17:21:20 -- common/autotest_common.sh@10 -- # set +x 00:29:04.507 17:21:20 -- host/discovery_remove_ifc.sh@29 -- # sort 
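Note: with the old controller unreachable, the test restores the data path and lets the still-running discovery service re-attach. The traced commands boil down to:

  # Put the target address back inside its network namespace, raise the link.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # Discovery reconnects on its own; the re-attached namespace must show up
  # under a new controller name, hence waiting for nvme1n1 rather than nvme0n1.
  wait_for_bdev nvme1n1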
00:29:04.507 17:21:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:04.507 17:21:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:04.507 17:21:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:04.507 17:21:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:04.764 [2024-07-20 17:21:20.686649] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:04.764 [2024-07-20 17:21:20.686704] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:04.764 [2024-07-20 17:21:20.686741] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:04.764 [2024-07-20 17:21:20.686766] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:04.764 [2024-07-20 17:21:20.686783] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:04.764 [2024-07-20 17:21:20.694562] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11a8aa0 was disconnected and freed. delete nvme_qpair. 00:29:05.696 17:21:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:05.696 17:21:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:05.696 17:21:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:05.696 17:21:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:05.696 17:21:21 -- common/autotest_common.sh@10 -- # set +x 00:29:05.696 17:21:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:05.697 17:21:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:05.697 17:21:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:05.697 17:21:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:05.697 17:21:21 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:05.697 17:21:21 -- host/discovery_remove_ifc.sh@90 -- # killprocess 655067 00:29:05.697 17:21:21 -- common/autotest_common.sh@926 -- # '[' -z 655067 ']' 00:29:05.697 17:21:21 -- common/autotest_common.sh@930 -- # kill -0 655067 00:29:05.697 17:21:21 -- common/autotest_common.sh@931 -- # uname 00:29:05.697 17:21:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:05.697 17:21:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 655067 00:29:05.697 17:21:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:05.697 17:21:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:05.697 17:21:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 655067' 00:29:05.697 killing process with pid 655067 00:29:05.697 17:21:21 -- common/autotest_common.sh@945 -- # kill 655067 00:29:05.697 17:21:21 -- common/autotest_common.sh@950 -- # wait 655067 00:29:05.954 17:21:21 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:05.954 17:21:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:05.954 17:21:21 -- nvmf/common.sh@116 -- # sync 00:29:05.954 17:21:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:05.954 17:21:21 -- nvmf/common.sh@119 -- # set +e 00:29:05.954 17:21:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:05.954 17:21:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:05.954 rmmod nvme_tcp 00:29:05.954 rmmod nvme_fabrics 00:29:05.954 rmmod nvme_keyring 00:29:05.954 17:21:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:05.954 17:21:21 -- nvmf/common.sh@123 -- # set -e 00:29:05.954 17:21:21 -- 
nvmf/common.sh@124 -- # return 0 00:29:05.954 17:21:21 -- nvmf/common.sh@477 -- # '[' -n 654733 ']' 00:29:05.954 17:21:21 -- nvmf/common.sh@478 -- # killprocess 654733 00:29:05.954 17:21:21 -- common/autotest_common.sh@926 -- # '[' -z 654733 ']' 00:29:05.954 17:21:21 -- common/autotest_common.sh@930 -- # kill -0 654733 00:29:05.954 17:21:21 -- common/autotest_common.sh@931 -- # uname 00:29:05.954 17:21:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:05.954 17:21:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 654733 00:29:05.954 17:21:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:05.954 17:21:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:05.954 17:21:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 654733' 00:29:05.954 killing process with pid 654733 00:29:05.954 17:21:22 -- common/autotest_common.sh@945 -- # kill 654733 00:29:05.954 17:21:22 -- common/autotest_common.sh@950 -- # wait 654733 00:29:06.212 17:21:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:06.212 17:21:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:06.212 17:21:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:06.212 17:21:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:06.212 17:21:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:06.212 17:21:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.212 17:21:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:06.212 17:21:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.744 17:21:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:08.744 00:29:08.744 real 0m18.243s 00:29:08.744 user 0m25.573s 00:29:08.744 sys 0m2.811s 00:29:08.744 17:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:08.744 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:29:08.744 ************************************ 00:29:08.744 END TEST nvmf_discovery_remove_ifc 00:29:08.744 ************************************ 00:29:08.744 17:21:24 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:29:08.744 17:21:24 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:08.744 17:21:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:08.744 17:21:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:08.744 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:29:08.744 ************************************ 00:29:08.744 START TEST nvmf_digest 00:29:08.744 ************************************ 00:29:08.744 17:21:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:08.744 * Looking for test storage... 
00:29:08.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:08.744 17:21:24 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:08.744 17:21:24 -- nvmf/common.sh@7 -- # uname -s 00:29:08.744 17:21:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:08.744 17:21:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:08.744 17:21:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:08.744 17:21:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:08.744 17:21:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:08.744 17:21:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:08.744 17:21:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:08.744 17:21:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:08.744 17:21:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:08.744 17:21:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:08.744 17:21:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.744 17:21:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:08.744 17:21:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:08.744 17:21:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:08.744 17:21:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:08.744 17:21:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:08.744 17:21:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:08.744 17:21:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:08.744 17:21:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:08.744 17:21:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.744 17:21:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.744 17:21:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.744 17:21:24 -- paths/export.sh@5 -- # export PATH 00:29:08.744 17:21:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:08.744 17:21:24 -- nvmf/common.sh@46 -- # : 0 00:29:08.744 17:21:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:08.744 17:21:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:08.744 17:21:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:08.744 17:21:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:08.744 17:21:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:08.744 17:21:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:08.744 17:21:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:08.744 17:21:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:08.744 17:21:24 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:08.744 17:21:24 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:08.744 17:21:24 -- host/digest.sh@16 -- # runtime=2 00:29:08.744 17:21:24 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:29:08.744 17:21:24 -- host/digest.sh@132 -- # nvmftestinit 00:29:08.744 17:21:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:08.744 17:21:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:08.744 17:21:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:08.744 17:21:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:08.744 17:21:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:08.744 17:21:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.744 17:21:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:08.744 17:21:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:08.744 17:21:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:08.744 17:21:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:08.744 17:21:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:08.744 17:21:24 -- common/autotest_common.sh@10 -- # set +x 00:29:10.645 17:21:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:10.645 17:21:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:10.645 17:21:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:10.645 17:21:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:10.645 17:21:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:10.645 17:21:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:10.645 17:21:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:10.645 17:21:26 -- 
nvmf/common.sh@294 -- # net_devs=() 00:29:10.645 17:21:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:10.645 17:21:26 -- nvmf/common.sh@295 -- # e810=() 00:29:10.645 17:21:26 -- nvmf/common.sh@295 -- # local -ga e810 00:29:10.645 17:21:26 -- nvmf/common.sh@296 -- # x722=() 00:29:10.645 17:21:26 -- nvmf/common.sh@296 -- # local -ga x722 00:29:10.645 17:21:26 -- nvmf/common.sh@297 -- # mlx=() 00:29:10.645 17:21:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:10.645 17:21:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.645 17:21:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.645 17:21:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.645 17:21:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.645 17:21:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.645 17:21:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.645 17:21:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.645 17:21:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.645 17:21:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.645 17:21:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.646 17:21:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.646 17:21:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:10.646 17:21:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:10.646 17:21:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:10.646 17:21:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:10.646 17:21:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:10.646 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:10.646 17:21:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:10.646 17:21:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:10.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:10.646 17:21:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:10.646 17:21:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:10.646 17:21:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.646 17:21:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:10.646 17:21:26 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.646 17:21:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:10.646 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:10.646 17:21:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.646 17:21:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:10.646 17:21:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.646 17:21:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:10.646 17:21:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.646 17:21:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:10.646 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:10.646 17:21:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.646 17:21:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:10.646 17:21:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:10.646 17:21:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:10.646 17:21:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.646 17:21:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.646 17:21:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.646 17:21:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:10.646 17:21:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.646 17:21:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.646 17:21:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:10.646 17:21:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.646 17:21:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.646 17:21:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:10.646 17:21:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:10.646 17:21:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.646 17:21:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.646 17:21:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.646 17:21:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.646 17:21:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:10.646 17:21:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.646 17:21:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.646 17:21:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.646 17:21:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:10.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:29:10.646 00:29:10.646 --- 10.0.0.2 ping statistics --- 00:29:10.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.646 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:10.646 17:21:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:29:10.646 00:29:10.646 --- 10.0.0.1 ping statistics --- 00:29:10.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.646 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:29:10.646 17:21:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.646 17:21:26 -- nvmf/common.sh@410 -- # return 0 00:29:10.646 17:21:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:10.646 17:21:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.646 17:21:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:10.646 17:21:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.646 17:21:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:10.646 17:21:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:10.646 17:21:26 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:10.646 17:21:26 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:29:10.646 17:21:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:10.646 17:21:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:10.646 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:29:10.646 ************************************ 00:29:10.646 START TEST nvmf_digest_clean 00:29:10.646 ************************************ 00:29:10.646 17:21:26 -- common/autotest_common.sh@1104 -- # run_digest 00:29:10.646 17:21:26 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:29:10.646 17:21:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:10.646 17:21:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:10.646 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:29:10.646 17:21:26 -- nvmf/common.sh@469 -- # nvmfpid=658603 00:29:10.646 17:21:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:10.646 17:21:26 -- nvmf/common.sh@470 -- # waitforlisten 658603 00:29:10.646 17:21:26 -- common/autotest_common.sh@819 -- # '[' -z 658603 ']' 00:29:10.646 17:21:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.646 17:21:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:10.646 17:21:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.646 17:21:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:10.646 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:29:10.646 [2024-07-20 17:21:26.555295] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:10.646 [2024-07-20 17:21:26.555376] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.646 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.646 [2024-07-20 17:21:26.631115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.646 [2024-07-20 17:21:26.718372] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:10.646 [2024-07-20 17:21:26.718564] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.646 [2024-07-20 17:21:26.718585] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.646 [2024-07-20 17:21:26.718600] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.646 [2024-07-20 17:21:26.718629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.646 17:21:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:10.646 17:21:26 -- common/autotest_common.sh@852 -- # return 0 00:29:10.646 17:21:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:10.646 17:21:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:10.646 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:29:10.646 17:21:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.646 17:21:26 -- host/digest.sh@120 -- # common_target_config 00:29:10.646 17:21:26 -- host/digest.sh@43 -- # rpc_cmd 00:29:10.646 17:21:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:10.646 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:29:10.905 null0 00:29:10.905 [2024-07-20 17:21:26.882326] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.905 [2024-07-20 17:21:26.906549] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.905 17:21:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:10.905 17:21:26 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:29:10.905 17:21:26 -- host/digest.sh@77 -- # local rw bs qd 00:29:10.905 17:21:26 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:10.905 17:21:26 -- host/digest.sh@80 -- # rw=randread 00:29:10.905 17:21:26 -- host/digest.sh@80 -- # bs=4096 00:29:10.905 17:21:26 -- host/digest.sh@80 -- # qd=128 00:29:10.905 17:21:26 -- host/digest.sh@82 -- # bperfpid=658745 00:29:10.905 17:21:26 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:10.905 17:21:26 -- host/digest.sh@83 -- # waitforlisten 658745 /var/tmp/bperf.sock 00:29:10.905 17:21:26 -- common/autotest_common.sh@819 -- # '[' -z 658745 ']' 00:29:10.905 17:21:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:10.905 17:21:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:10.905 17:21:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:10.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:10.905 17:21:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:10.905 17:21:26 -- common/autotest_common.sh@10 -- # set +x 00:29:10.905 [2024-07-20 17:21:26.950473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:10.905 [2024-07-20 17:21:26.950539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658745 ] 00:29:10.905 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.905 [2024-07-20 17:21:27.012344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.163 [2024-07-20 17:21:27.101708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.163 17:21:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:11.163 17:21:27 -- common/autotest_common.sh@852 -- # return 0 00:29:11.163 17:21:27 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:11.163 17:21:27 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:11.163 17:21:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:11.421 17:21:27 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.421 17:21:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.986 nvme0n1 00:29:11.986 17:21:27 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:11.986 17:21:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:11.986 Running I/O for 2 seconds... 
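Note: each run_bperf case follows the sequence just traced: start bdevperf with the requested workload on a private RPC socket, finish framework init, attach the target namespace with digest enabled, then trigger the timed run. For this first case (randread, 4 KiB, queue depth 128) the traced commands reduce to (paths shortened):

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # --ddgst enables the NVMe/TCP data digest, so every I/O exercises crc32c.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests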
00:29:13.882 00:29:13.882 Latency(us) 00:29:13.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.882 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:13.882 nvme0n1 : 2.00 21502.91 84.00 0.00 0.00 5945.09 3276.80 19709.35 00:29:13.882 =================================================================================================================== 00:29:13.882 Total : 21502.91 84.00 0.00 0.00 5945.09 3276.80 19709.35 00:29:13.882 0 00:29:14.140 17:21:30 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:14.140 17:21:30 -- host/digest.sh@92 -- # get_accel_stats 00:29:14.140 17:21:30 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:14.140 17:21:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:14.140 17:21:30 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:14.140 | select(.opcode=="crc32c") 00:29:14.140 | "\(.module_name) \(.executed)"' 00:29:14.398 17:21:30 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:14.398 17:21:30 -- host/digest.sh@93 -- # exp_module=software 00:29:14.398 17:21:30 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:14.398 17:21:30 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:14.398 17:21:30 -- host/digest.sh@97 -- # killprocess 658745 00:29:14.398 17:21:30 -- common/autotest_common.sh@926 -- # '[' -z 658745 ']' 00:29:14.398 17:21:30 -- common/autotest_common.sh@930 -- # kill -0 658745 00:29:14.398 17:21:30 -- common/autotest_common.sh@931 -- # uname 00:29:14.398 17:21:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:14.398 17:21:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 658745 00:29:14.398 17:21:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:14.398 17:21:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:14.398 17:21:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 658745' 00:29:14.398 killing process with pid 658745 00:29:14.398 17:21:30 -- common/autotest_common.sh@945 -- # kill 658745 00:29:14.398 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.398 00:29:14.398 Latency(us) 00:29:14.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.398 =================================================================================================================== 00:29:14.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.398 17:21:30 -- common/autotest_common.sh@950 -- # wait 658745 00:29:14.655 17:21:30 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:29:14.655 17:21:30 -- host/digest.sh@77 -- # local rw bs qd 00:29:14.655 17:21:30 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:14.655 17:21:30 -- host/digest.sh@80 -- # rw=randread 00:29:14.655 17:21:30 -- host/digest.sh@80 -- # bs=131072 00:29:14.655 17:21:30 -- host/digest.sh@80 -- # qd=16 00:29:14.655 17:21:30 -- host/digest.sh@82 -- # bperfpid=659168 00:29:14.655 17:21:30 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:14.655 17:21:30 -- host/digest.sh@83 -- # waitforlisten 659168 /var/tmp/bperf.sock 00:29:14.655 17:21:30 -- common/autotest_common.sh@819 -- # '[' -z 659168 ']' 00:29:14.655 17:21:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:29:14.655 17:21:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:14.655 17:21:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.655 17:21:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:14.655 17:21:30 -- common/autotest_common.sh@10 -- # set +x 00:29:14.655 [2024-07-20 17:21:30.615859] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:14.655 [2024-07-20 17:21:30.615949] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659168 ] 00:29:14.655 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.655 Zero copy mechanism will not be used. 00:29:14.655 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.655 [2024-07-20 17:21:30.674566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.655 [2024-07-20 17:21:30.757855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.655 17:21:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:14.655 17:21:30 -- common/autotest_common.sh@852 -- # return 0 00:29:14.655 17:21:30 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:14.655 17:21:30 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:14.655 17:21:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:15.221 17:21:31 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.221 17:21:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.478 nvme0n1 00:29:15.735 17:21:31 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:15.735 17:21:31 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.735 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:15.735 Zero copy mechanism will not be used. 00:29:15.735 Running I/O for 2 seconds... 
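Note: the accel_get_stats step that follows each of these runs (first seen after the randread 4096 case above) is the actual pass/fail gate for the digest tests: it asks the bperf process which accel module executed crc32c and how many times. The check reduces to:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # Expected here: "software <count>" with count > 0, since no hardware
  # accel engine is configured on this rig.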
00:29:17.629 00:29:17.629 Latency(us) 00:29:17.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.630 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:17.630 nvme0n1 : 2.00 1668.74 208.59 0.00 0.00 9583.48 8689.59 16796.63 00:29:17.630 =================================================================================================================== 00:29:17.630 Total : 1668.74 208.59 0.00 0.00 9583.48 8689.59 16796.63 00:29:17.630 0 00:29:17.630 17:21:33 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:17.630 17:21:33 -- host/digest.sh@92 -- # get_accel_stats 00:29:17.630 17:21:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:17.630 17:21:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:17.630 | select(.opcode=="crc32c") 00:29:17.630 | "\(.module_name) \(.executed)"' 00:29:17.630 17:21:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:18.195 17:21:34 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:18.195 17:21:34 -- host/digest.sh@93 -- # exp_module=software 00:29:18.195 17:21:34 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:18.195 17:21:34 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:18.195 17:21:34 -- host/digest.sh@97 -- # killprocess 659168 00:29:18.195 17:21:34 -- common/autotest_common.sh@926 -- # '[' -z 659168 ']' 00:29:18.195 17:21:34 -- common/autotest_common.sh@930 -- # kill -0 659168 00:29:18.195 17:21:34 -- common/autotest_common.sh@931 -- # uname 00:29:18.195 17:21:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:18.195 17:21:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 659168 00:29:18.195 17:21:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:18.195 17:21:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:18.195 17:21:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 659168' 00:29:18.195 killing process with pid 659168 00:29:18.195 17:21:34 -- common/autotest_common.sh@945 -- # kill 659168 00:29:18.195 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.195 00:29:18.195 Latency(us) 00:29:18.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.195 =================================================================================================================== 00:29:18.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.195 17:21:34 -- common/autotest_common.sh@950 -- # wait 659168 00:29:18.195 17:21:34 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:18.195 17:21:34 -- host/digest.sh@77 -- # local rw bs qd 00:29:18.195 17:21:34 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:18.195 17:21:34 -- host/digest.sh@80 -- # rw=randwrite 00:29:18.195 17:21:34 -- host/digest.sh@80 -- # bs=4096 00:29:18.195 17:21:34 -- host/digest.sh@80 -- # qd=128 00:29:18.195 17:21:34 -- host/digest.sh@82 -- # bperfpid=659586 00:29:18.195 17:21:34 -- host/digest.sh@83 -- # waitforlisten 659586 /var/tmp/bperf.sock 00:29:18.195 17:21:34 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:18.195 17:21:34 -- common/autotest_common.sh@819 -- # '[' -z 659586 ']' 00:29:18.195 17:21:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:29:18.195 17:21:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:18.195 17:21:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:18.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:18.195 17:21:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:18.195 17:21:34 -- common/autotest_common.sh@10 -- # set +x 00:29:18.195 [2024-07-20 17:21:34.328882] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:18.195 [2024-07-20 17:21:34.328973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659586 ] 00:29:18.452 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.452 [2024-07-20 17:21:34.392020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.452 [2024-07-20 17:21:34.476422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.452 17:21:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:18.452 17:21:34 -- common/autotest_common.sh@852 -- # return 0 00:29:18.452 17:21:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:18.452 17:21:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:18.452 17:21:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:19.025 17:21:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.025 17:21:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.281 nvme0n1 00:29:19.281 17:21:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:19.281 17:21:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.538 Running I/O for 2 seconds... 
00:29:21.435 00:29:21.435 Latency(us) 00:29:21.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.435 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.435 nvme0n1 : 2.01 19067.82 74.48 0.00 0.00 6698.10 3398.16 13883.92 00:29:21.435 =================================================================================================================== 00:29:21.435 Total : 19067.82 74.48 0.00 0.00 6698.10 3398.16 13883.92 00:29:21.435 0 00:29:21.435 17:21:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:21.435 17:21:37 -- host/digest.sh@92 -- # get_accel_stats 00:29:21.435 17:21:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:21.435 17:21:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:21.435 | select(.opcode=="crc32c") 00:29:21.435 | "\(.module_name) \(.executed)"' 00:29:21.435 17:21:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:21.692 17:21:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:21.692 17:21:37 -- host/digest.sh@93 -- # exp_module=software 00:29:21.692 17:21:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:21.692 17:21:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:21.693 17:21:37 -- host/digest.sh@97 -- # killprocess 659586 00:29:21.693 17:21:37 -- common/autotest_common.sh@926 -- # '[' -z 659586 ']' 00:29:21.693 17:21:37 -- common/autotest_common.sh@930 -- # kill -0 659586 00:29:21.693 17:21:37 -- common/autotest_common.sh@931 -- # uname 00:29:21.693 17:21:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:21.693 17:21:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 659586 00:29:21.693 17:21:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:21.693 17:21:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:21.693 17:21:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 659586' 00:29:21.693 killing process with pid 659586 00:29:21.693 17:21:37 -- common/autotest_common.sh@945 -- # kill 659586 00:29:21.693 Received shutdown signal, test time was about 2.000000 seconds 00:29:21.693 00:29:21.693 Latency(us) 00:29:21.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.693 =================================================================================================================== 00:29:21.693 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.693 17:21:37 -- common/autotest_common.sh@950 -- # wait 659586 00:29:21.950 17:21:37 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:21.950 17:21:37 -- host/digest.sh@77 -- # local rw bs qd 00:29:21.950 17:21:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:21.950 17:21:37 -- host/digest.sh@80 -- # rw=randwrite 00:29:21.950 17:21:37 -- host/digest.sh@80 -- # bs=131072 00:29:21.950 17:21:37 -- host/digest.sh@80 -- # qd=16 00:29:21.950 17:21:37 -- host/digest.sh@82 -- # bperfpid=660008 00:29:21.950 17:21:37 -- host/digest.sh@83 -- # waitforlisten 660008 /var/tmp/bperf.sock 00:29:21.950 17:21:37 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:21.950 17:21:37 -- common/autotest_common.sh@819 -- # '[' -z 660008 ']' 00:29:21.950 17:21:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:29:21.950 17:21:37 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:21.950 17:21:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:21.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:21.950 17:21:37 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:21.950 17:21:37 -- common/autotest_common.sh@10 -- # set +x
00:29:21.950 [2024-07-20 17:21:37.991348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:29:21.950 [2024-07-20 17:21:37.991439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660008 ]
00:29:21.950 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:21.950 Zero copy mechanism will not be used.
00:29:21.950 EAL: No free 2048 kB hugepages reported on node 1
00:29:21.950 [2024-07-20 17:21:38.049742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:22.207 [2024-07-20 17:21:38.135054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:22.207 17:21:38 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:22.207 17:21:38 -- common/autotest_common.sh@852 -- # return 0
00:29:22.207 17:21:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]]
00:29:22.207 17:21:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init
00:29:22.207 17:21:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:29:22.464 17:21:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:22.464 17:21:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:22.722 nvme0n1
00:29:22.722 17:21:38 -- host/digest.sh@91 -- # bperf_py perform_tests
00:29:22.722 17:21:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:22.984 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:22.984 Zero copy mechanism will not be used.
00:29:22.984 Running I/O for 2 seconds...
00:29:24.905 
00:29:24.905 Latency(us)
00:29:24.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.905 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:24.905 nvme0n1 : 2.02 911.72 113.97 0.00 0.00 17470.87 7039.05 23787.14
00:29:24.905 ===================================================================================================================
00:29:24.905 Total : 911.72 113.97 0.00 0.00 17470.87 7039.05 23787.14
00:29:24.905 0
00:29:24.905 17:21:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed
00:29:24.905 17:21:40 -- host/digest.sh@92 -- # get_accel_stats
00:29:24.905 17:21:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:24.905 17:21:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:24.905 17:21:40 -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:24.905 | select(.opcode=="crc32c")
00:29:24.905 | "\(.module_name) \(.executed)"'
00:29:25.162 17:21:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]]
00:29:25.162 17:21:41 -- host/digest.sh@93 -- # exp_module=software
00:29:25.162 17:21:41 -- host/digest.sh@94 -- # (( acc_executed > 0 ))
00:29:25.162 17:21:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:25.162 17:21:41 -- host/digest.sh@97 -- # killprocess 660008
00:29:25.162 17:21:41 -- common/autotest_common.sh@926 -- # '[' -z 660008 ']'
00:29:25.162 17:21:41 -- common/autotest_common.sh@930 -- # kill -0 660008
00:29:25.162 17:21:41 -- common/autotest_common.sh@931 -- # uname
00:29:25.162 17:21:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:25.162 17:21:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 660008
00:29:25.162 17:21:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:25.162 17:21:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:25.162 17:21:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 660008'
killing process with pid 660008
00:29:25.162 17:21:41 -- common/autotest_common.sh@945 -- # kill 660008
00:29:25.162 Received shutdown signal, test time was about 2.000000 seconds
00:29:25.162 
00:29:25.162 Latency(us)
00:29:25.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:25.162 ===================================================================================================================
00:29:25.162 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:25.163 17:21:41 -- common/autotest_common.sh@950 -- # wait 660008
00:29:25.421 17:21:41 -- host/digest.sh@126 -- # killprocess 658603
00:29:25.421 17:21:41 -- common/autotest_common.sh@926 -- # '[' -z 658603 ']'
00:29:25.421 17:21:41 -- common/autotest_common.sh@930 -- # kill -0 658603
00:29:25.421 17:21:41 -- common/autotest_common.sh@931 -- # uname
00:29:25.421 17:21:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:25.421 17:21:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 658603
00:29:25.421 17:21:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:29:25.421 17:21:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:29:25.421 17:21:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 658603'
killing process with pid 658603
00:29:25.421 17:21:41 -- common/autotest_common.sh@945 -- # kill 658603
00:29:25.421 17:21:41 -- common/autotest_common.sh@950 -- # wait 658603
00:29:25.679 
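killprocess, traced twice above (once for the bdevperf instance, once for the nvmf target pid 658603 left over from the clean test), follows a guard-then-reap pattern. A simplified sketch; the real helper in autotest_common.sh also inspects the process name (the ps -o comm= step) to special-case sudo-owned processes:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1   # the '[' -z ... ']' guard in the trace
    kill -0 "$pid"              # fails fast if the pid is already gone
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                 # reap it; bdevperf prints its shutdown stats here
}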
00:29:25.679 real 0m15.153s
00:29:25.679 user 0m30.399s
00:29:25.679 sys 0m3.801s
00:29:25.679 17:21:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:25.679 17:21:41 -- common/autotest_common.sh@10 -- # set +x
00:29:25.679 ************************************
00:29:25.679 END TEST nvmf_digest_clean
00:29:25.679 ************************************
00:29:25.679 17:21:41 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error
00:29:25.679 17:21:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:29:25.679 17:21:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:25.679 17:21:41 -- common/autotest_common.sh@10 -- # set +x
00:29:25.679 ************************************
00:29:25.679 START TEST nvmf_digest_error
00:29:25.679 ************************************
00:29:25.679 17:21:41 -- common/autotest_common.sh@1104 -- # run_digest_error
00:29:25.679 17:21:41 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc
00:29:25.679 17:21:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:29:25.679 17:21:41 -- common/autotest_common.sh@712 -- # xtrace_disable
00:29:25.679 17:21:41 -- common/autotest_common.sh@10 -- # set +x
00:29:25.679 17:21:41 -- nvmf/common.sh@469 -- # nvmfpid=660573
00:29:25.679 17:21:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:29:25.679 17:21:41 -- nvmf/common.sh@470 -- # waitforlisten 660573
00:29:25.679 17:21:41 -- common/autotest_common.sh@819 -- # '[' -z 660573 ']'
00:29:25.679 17:21:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:25.679 17:21:41 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:25.679 17:21:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:25.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:25.679 17:21:41 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:25.679 17:21:41 -- common/autotest_common.sh@10 -- # set +x
00:29:25.679 [2024-07-20 17:21:41.731638] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:29:25.679 [2024-07-20 17:21:41.731714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:25.679 EAL: No free 2048 kB hugepages reported on node 1
00:29:25.679 [2024-07-20 17:21:41.798830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:25.938 [2024-07-20 17:21:41.887996] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:25.938 [2024-07-20 17:21:41.888157] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:25.938 [2024-07-20 17:21:41.888178] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:25.938 [2024-07-20 17:21:41.888193] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
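nvmfappstart launches the target inside the test network namespace (cvl_0_0_ns_spdk) and holds it at --wait-for-rpc so that startup-time RPCs can run before initialization completes. A sketch of the launch-and-wait pattern; the polling loop stands in for the waitforlisten helper (which caps itself at max_retries=100), and using rpc_get_methods as the liveness probe is an assumption, not what the helper literally calls:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the target paused, inside the namespace, with tracing enabled (-e 0xFFFF).
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Poll until the app answers on its default RPC socket.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done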
00:29:25.938 [2024-07-20 17:21:41.888234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:25.938 17:21:41 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:25.938 17:21:41 -- common/autotest_common.sh@852 -- # return 0
00:29:25.938 17:21:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:29:25.938 17:21:41 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:25.938 17:21:41 -- common/autotest_common.sh@10 -- # set +x
00:29:25.938 17:21:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:25.938 17:21:41 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:29:25.938 17:21:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:25.938 17:21:41 -- common/autotest_common.sh@10 -- # set +x
00:29:25.938 [2024-07-20 17:21:41.988910] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:29:25.938 17:21:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:25.938 17:21:41 -- host/digest.sh@104 -- # common_target_config
00:29:25.938 17:21:41 -- host/digest.sh@43 -- # rpc_cmd
00:29:25.938 17:21:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:25.938 17:21:41 -- common/autotest_common.sh@10 -- # set +x
00:29:26.196 null0
00:29:26.196 [2024-07-20 17:21:42.107235] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:26.196 [2024-07-20 17:21:42.131471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:26.196 17:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:26.196 17:21:42 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128
00:29:26.196 17:21:42 -- host/digest.sh@54 -- # local rw bs qd
00:29:26.196 17:21:42 -- host/digest.sh@56 -- # rw=randread
00:29:26.196 17:21:42 -- host/digest.sh@56 -- # bs=4096
00:29:26.196 17:21:42 -- host/digest.sh@56 -- # qd=128
00:29:26.196 17:21:42 -- host/digest.sh@58 -- # bperfpid=660598
00:29:26.196 17:21:42 -- host/digest.sh@60 -- # waitforlisten 660598 /var/tmp/bperf.sock
00:29:26.196 17:21:42 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:29:26.196 17:21:42 -- common/autotest_common.sh@819 -- # '[' -z 660598 ']'
00:29:26.196 17:21:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:26.196 17:21:42 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:26.196 17:21:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:26.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:26.196 17:21:42 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:26.196 17:21:42 -- common/autotest_common.sh@10 -- # set +x
00:29:26.196 [2024-07-20 17:21:42.175324] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
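The notices above ("null0", TCP transport init, listening on 10.0.0.2:4420) are the output of rerouting crc32c and then running the common target config, which the @43 rpc_cmd trace suggests is issued as one batched RPC session. A sketch of an equivalent setup as individual calls; the subsystem and listener RPC names below are standard SPDK RPCs but are an assumption here, since the log only shows their resulting notices, and the null bdev size and block size arguments are illustrative:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

RPC accel_assign_opc -o crc32c -m error   # route crc32c through the error module
RPC framework_start_init                  # leave the --wait-for-rpc pause
RPC bdev_null_create null0 100 4096       # prints "null0" (size/block args assumed)
RPC nvmf_create_transport -t tcp          # "*** TCP Transport Init ***"
RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420            # "Listening on 10.0.0.2 port 4420"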
00:29:26.196 [2024-07-20 17:21:42.175392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660598 ]
00:29:26.196 EAL: No free 2048 kB hugepages reported on node 1
00:29:26.196 [2024-07-20 17:21:42.237308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:26.196 [2024-07-20 17:21:42.325778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:27.129 17:21:43 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:27.129 17:21:43 -- common/autotest_common.sh@852 -- # return 0
00:29:27.129 17:21:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:27.129 17:21:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:27.387 17:21:43 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:27.387 17:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:27.387 17:21:43 -- common/autotest_common.sh@10 -- # set +x
00:29:27.387 17:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:27.387 17:21:43 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:27.387 17:21:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:27.644 nvme0n1
00:29:27.644 17:21:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:27.644 17:21:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:27.644 17:21:43 -- common/autotest_common.sh@10 -- # set +x
00:29:27.644 17:21:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:27.644 17:21:43 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:27.644 17:21:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:27.902 Running I/O for 2 seconds...
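The sequence just traced is the heart of the error test: the controller is attached with injection disabled so setup succeeds, then the target's crc32c operations are switched to corrupt before the workload starts. Because the target now produces bad data digests, the host's digest check fails on reads, which is exactly the flood of records below; (00/22) decodes as status code type 0, status code 0x22, Command Transient Transport Error, and --bdev-retry-count -1 makes the bdev layer retry those failures indefinitely. A sketch, assuming the target's RPC socket is the default /var/tmp/spdk.sock seen in the waitforlisten trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Initiator side: keep per-controller NVMe error counters and retry forever.
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Target side: injection off while the controller attaches cleanly...
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then corrupt crc32c results (-i 256 as traced) and run the workload.
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests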
00:29:27.902 [2024-07-20 17:21:43.841211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.841263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.841284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.856486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.856528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.856547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.867471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.867504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.867523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.881383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.881416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.881435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.893568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.893612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.893633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.906870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.906902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.906932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.919233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.919267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.919287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.931497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.931533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.931554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.944596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.944629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.944650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.956926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.956957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.956975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.968773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.968829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.968849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.981663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.981697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.981717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:43.993551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:43.993584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:43.993603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:44.004991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:44.005021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:44.005039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:44.021067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:44.021105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:44.021124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:44.033258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:44.033290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:44.033309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:44.047590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:44.047623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:44.047642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.902 [2024-07-20 17:21:44.058859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:27.902 [2024-07-20 17:21:44.058890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.902 [2024-07-20 17:21:44.058908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.069712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.069745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.069764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.084412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.084445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.084464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.096343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.096374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.160 [2024-07-20 17:21:44.096392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.107784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.107825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.107844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.118626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.118657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.118675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.130571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.130602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.130635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.141969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.142000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.142018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.153386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.153416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.153433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.165197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.165228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.165261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.175804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.175833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:20948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.175851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.188723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.188757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.188776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.200848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.200879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.200897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.212223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.212253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.212271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.224971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.225006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.225026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.235677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.235707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.235724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.247850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.247881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.247900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.259237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.259268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.259285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.272047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.272078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.272096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.283025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.283060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.283078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.295338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.295369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.295387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.307266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.307297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.307315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.160 [2024-07-20 17:21:44.317751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.160 [2024-07-20 17:21:44.317782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.160 [2024-07-20 17:21:44.317807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.329105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.329135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.329153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.339997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 
00:29:28.417 [2024-07-20 17:21:44.340028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.340045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.351551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.351582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.351600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.363366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.363397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.363414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.374857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.374888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.374906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.386198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.386228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.386261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.398016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.398047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.398064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.409403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.409434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.409451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.420531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.420561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.420584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.432238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.432282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.432300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.417 [2024-07-20 17:21:44.445047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.417 [2024-07-20 17:21:44.445078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.417 [2024-07-20 17:21:44.445096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.455667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.455698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.455716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.467754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.467785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.467811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.479561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.479606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.479624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.490926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.490957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.490974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.502726] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.502756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.502789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.515023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.515054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.515072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.526683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.526717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.526735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.538181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.538212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.538230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.549759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.549789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.549816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.562016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.562047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.562064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.418 [2024-07-20 17:21:44.573653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.418 [2024-07-20 17:21:44.573683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.418 [2024-07-20 17:21:44.573716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.584876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.584907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.584925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.597277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.597323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.597340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.609018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.609049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.609067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.620227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.620258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.620276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.631986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.632019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.632037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.643404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.643434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.643466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.654817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.654857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.654875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.666945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.666976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.666994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.678732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.678762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.678780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.690155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.675 [2024-07-20 17:21:44.690185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.675 [2024-07-20 17:21:44.690202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.675 [2024-07-20 17:21:44.701452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.701482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.701499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.713474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.713505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.713524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.725624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.725655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.725679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.737240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.737286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.737304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.748985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.749016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.749034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.760682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.760728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.760746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.772475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.772506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.772523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.783828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.783858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.783876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.795556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.795601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.795619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.807026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.807058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.807091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.819244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.819274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.676 [2024-07-20 17:21:44.819292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.676 [2024-07-20 17:21:44.831063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.676 [2024-07-20 17:21:44.831093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.676 [2024-07-20 17:21:44.831111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.933 [2024-07-20 17:21:44.842425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.933 [2024-07-20 17:21:44.842456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.933 [2024-07-20 17:21:44.842473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.933 [2024-07-20 17:21:44.853872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.933 [2024-07-20 17:21:44.853904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.853922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.867273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.867320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.867339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.878055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.878085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.878103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.890053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.890085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.890103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.901700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.901731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:11520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.901764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.913356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.913387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.913408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.924877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.924908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.924931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.935833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.935863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.935881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.947893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.947925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.947943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.959487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.959517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.959534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.970774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.970819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.970837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.982028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.982057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.982074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:44.994252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:44.994282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:44.994300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:45.005896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:45.005926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:45.005944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:45.017236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:45.017265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:45.017299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:45.029108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:45.029144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:45.029162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:45.041419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:45.041464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:45.041481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:45.052141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:45.052184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:45.052201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:45.064469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 
00:29:28.934 [2024-07-20 17:21:45.064499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:45.064516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:45.075554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:45.075582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:45.075614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.934 [2024-07-20 17:21:45.087821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:28.934 [2024-07-20 17:21:45.087850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.934 [2024-07-20 17:21:45.087868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.099077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.099121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.099139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.110628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.110656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.110689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.122367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.122396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.122414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.134317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.134346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.134363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.145788] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.145823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.145855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.157406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.157435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.157469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.169053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.169098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.169116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.181214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.181257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.181275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.192082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.192112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.192130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.205459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.205489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.205507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.216922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.216952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.216969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.229814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.229843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.229881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.240850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.240879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.240897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.253207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.253250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.253267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.264762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.264816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.264836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.276374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.276403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.276435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.287320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.287365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.287383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.299265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.299295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.299313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.310836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.310865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.310883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.322169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.322197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.191 [2024-07-20 17:21:45.322230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.191 [2024-07-20 17:21:45.333863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.191 [2024-07-20 17:21:45.333893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.192 [2024-07-20 17:21:45.333911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.192 [2024-07-20 17:21:45.346065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.192 [2024-07-20 17:21:45.346093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.192 [2024-07-20 17:21:45.346111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.357465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.357493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.357526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.368615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.368645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.368663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.380723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.380752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.380770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.392314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.392343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.392376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.403928] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.403958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.403976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.415595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.415623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.415640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.427630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.427660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.427685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.439121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.439165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.439182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.450628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.450659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.450691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.462343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.462372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:29.448 [2024-07-20 17:21:45.462390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.474419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.474449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.474466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.486040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.486084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.486102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.497620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.497648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.497681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.509215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.509259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.509277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.521471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.521499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.521532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.532930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.532964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.532982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.544279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.544308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25283 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.544325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.557039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.557068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.557085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.568285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.568313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.568345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.579461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.579489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.579521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.591330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.591359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.591391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.448 [2024-07-20 17:21:45.603417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.448 [2024-07-20 17:21:45.603447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.448 [2024-07-20 17:21:45.603464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.705 [2024-07-20 17:21:45.614209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.705 [2024-07-20 17:21:45.614238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.705 [2024-07-20 17:21:45.614271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.705 [2024-07-20 17:21:45.626274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.705 [2024-07-20 17:21:45.626317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.705 [2024-07-20 17:21:45.626335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.705 [2024-07-20 17:21:45.638265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.705 [2024-07-20 17:21:45.638308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.638325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.649339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.649368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.649402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.662351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.662395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.662412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.673064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.673115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.673132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.686016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.686045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.686063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.696810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.696840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.696857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.710211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 
00:29:29.706 [2024-07-20 17:21:45.710240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.710258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.721054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.721097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.721113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.733041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.733070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.733094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.745445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.745489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.745507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.756711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.756741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.756758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.766989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.767017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.767052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.780142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10) 00:29:29.706 [2024-07-20 17:21:45.780169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.706 [2024-07-20 17:21:45.780200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.706 [2024-07-20 17:21:45.791855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1979f10)
00:29:29.706 [2024-07-20 17:21:45.791884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:29.706 [2024-07-20 17:21:45.791902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:29.706 [2024-07-20 17:21:45.803274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10)
00:29:29.706 [2024-07-20 17:21:45.803302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:29.706 [2024-07-20 17:21:45.803334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:29.706 [2024-07-20 17:21:45.814699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10)
00:29:29.706 [2024-07-20 17:21:45.814727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:29.706 [2024-07-20 17:21:45.814761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:29.706 [2024-07-20 17:21:45.826037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1979f10)
00:29:29.706 [2024-07-20 17:21:45.826065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:29.706 [2024-07-20 17:21:45.826082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:29.706
00:29:29.706 Latency(us)
00:29:29.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.706 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:29.706 nvme0n1 : 2.01 21508.35 84.02 0.00 0.00 5941.14 3106.89 16699.54
00:29:29.706 ===================================================================================================================
00:29:29.706 Total : 21508.35 84.02 0.00 0.00 5941.14 3106.89 16699.54
00:29:29.706 0
00:29:29.706 17:21:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:29.706 17:21:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:29.706 17:21:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:29.706 17:21:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:29.706 | .driver_specific
00:29:29.706 | .nvme_error
00:29:29.706 | .status_code
00:29:29.706 | .command_transient_transport_error'
00:29:29.963 17:21:46 -- host/digest.sh@71 -- # (( 169 > 0 ))
00:29:29.964 17:21:46 -- host/digest.sh@73 -- # killprocess 660598
00:29:29.964 17:21:46 -- common/autotest_common.sh@926 -- # '[' -z 660598 ']'
00:29:29.964 17:21:46 -- common/autotest_common.sh@930 -- # kill -0 660598
00:29:29.964 17:21:46 -- common/autotest_common.sh@931 -- # uname
00:29:29.964 17:21:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:29.964 17:21:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 660598
00:29:29.964 17:21:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:29.964 17:21:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:29.964 17:21:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 660598'
00:29:29.964 killing process with pid 660598
00:29:29.964 17:21:46 -- common/autotest_common.sh@945 -- # kill 660598
00:29:29.964 Received shutdown signal, test time was about 2.000000 seconds
00:29:29.964
00:29:29.964 Latency(us)
00:29:29.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.964 ===================================================================================================================
00:29:29.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:29.964 17:21:46 -- common/autotest_common.sh@950 -- # wait 660598
00:29:30.221 17:21:46 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:29:30.221 17:21:46 -- host/digest.sh@54 -- # local rw bs qd
00:29:30.221 17:21:46 -- host/digest.sh@56 -- # rw=randread
00:29:30.221 17:21:46 -- host/digest.sh@56 -- # bs=131072
00:29:30.221 17:21:46 -- host/digest.sh@56 -- # qd=16
00:29:30.221 17:21:46 -- host/digest.sh@58 -- # bperfpid=661144
00:29:30.221 17:21:46 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:30.221 17:21:46 -- host/digest.sh@60 -- # waitforlisten 661144 /var/tmp/bperf.sock
00:29:30.221 17:21:46 -- common/autotest_common.sh@819 -- # '[' -z 661144 ']'
00:29:30.221 17:21:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:30.221 17:21:46 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:30.221 17:21:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:30.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:30.221 17:21:46 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:30.221 17:21:46 -- common/autotest_common.sh@10 -- # set +x
00:29:30.479 [2024-07-20 17:21:46.389112] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:29:30.479 [2024-07-20 17:21:46.389189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661144 ]
00:29:30.479 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:30.479 Zero copy mechanism will not be used.
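The `(( 169 > 0 ))` check traced above is the pass condition of the first run: `get_transient_errcount` reads the per-bdev NVMe error counters that `--nvme-error-stat` keeps, and 169 completions carried the transient transport error status. A minimal stand-alone sketch of the same query, assuming a bdevperf instance is already listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 as in the trace (the errcount variable name is ours):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Fetch per-bdev I/O statistics over the bdevperf RPC socket and extract the
    # count of completions that ended in TRANSIENT TRANSPORT ERROR (00/22);
    # the jq path mirrors host/digest.sh@28 in the trace above.
    errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # The run passes only if the injected digest corruption produced at least one such error.
    (( errcount > 0 )) && echo "observed $errcount transient transport errors"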
00:29:30.479 EAL: No free 2048 kB hugepages reported on node 1
00:29:30.479 [2024-07-20 17:21:46.451510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:30.479 [2024-07-20 17:21:46.538326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:31.423 17:21:47 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:31.423 17:21:47 -- common/autotest_common.sh@852 -- # return 0
00:29:31.423 17:21:47 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:31.423 17:21:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:31.423 17:21:47 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:31.423 17:21:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:31.423 17:21:47 -- common/autotest_common.sh@10 -- # set +x
00:29:31.423 17:21:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:31.423 17:21:47 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:31.423 17:21:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:31.987 nvme0n1
00:29:31.987 17:21:47 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:31.987 17:21:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:31.987 17:21:47 -- common/autotest_common.sh@10 -- # set +x
00:29:31.987 17:21:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:31.987 17:21:47 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:31.987 17:21:47 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:31.988 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:31.988 Zero copy mechanism will not be used.
00:29:31.988 Running I/O for 2 seconds...
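Condensing the trace above: this second pass exercises the same data-digest path with 128 KiB random reads at queue depth 16. A hedged sketch of the sequence, using only the commands and parameters visible in the log; the target-side `rpc_cmd` is assumed to talk to the NVMe-oF target's default RPC socket, and the script's waitforlisten/cleanup handling is omitted:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start bdevperf on its own RPC socket; -z makes it idle until perform_tests is called.
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # Keep NVMe error counters and retry failed I/O indefinitely, so digest errors are
    # counted rather than failing the workload outright.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Connect with data digest enabled; crc32c error injection stays disabled while attaching.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 32nd crc32c operation on the target, then kick off the 2-second run.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces on the host as the `nvme_tcp.c:1391 ... data digest error` / `COMMAND TRANSIENT TRANSPORT ERROR (00/22)` pair repeated in the output that follows.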
00:29:31.987 [2024-07-20 17:21:48.058762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc4de0)
00:29:31.987 [2024-07-20 17:21:48.058846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.987 [2024-07-20 17:21:48.058869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... similar data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries on tqpair=(0xfc4de0), logged roughly every 20 ms with varying LBAs (106 transient errors in total per the iostat query below), elided ...]
00:29:34.057 [2024-07-20 17:21:50.026927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfc4de0)
00:29:34.057 [2024-07-20 17:21:50.026960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.057 [2024-07-20 17:21:50.026979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.057
00:29:34.057 Latency(us)
00:29:34.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:34.057 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:34.057 nvme0n1 : 2.00 1650.43 206.30 0.00 0.00 9690.65 8980.86 20777.34
00:29:34.057 ===================================================================================================================
00:29:34.057 Total : 1650.43 206.30 0.00 0.00 9690.65 8980.86 20777.34
00:29:34.057 0
00:29:34.057 17:21:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:34.057 17:21:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:34.057 17:21:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:34.057 17:21:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:34.057 | .driver_specific
00:29:34.057 | .nvme_error
00:29:34.057 | .status_code
00:29:34.057 | .command_transient_transport_error'
00:29:34.315 17:21:50 -- host/digest.sh@71 -- # (( 106 > 0 ))
00:29:34.315 17:21:50 -- host/digest.sh@73 -- # killprocess 661144
00:29:34.315 17:21:50 -- common/autotest_common.sh@926 -- # '[' -z 661144 ']'
00:29:34.315 17:21:50 -- common/autotest_common.sh@930 -- # kill -0 661144
00:29:34.315 17:21:50 -- common/autotest_common.sh@931 -- # uname
00:29:34.315 17:21:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:34.315 17:21:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 661144
00:29:34.315 17:21:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:34.315 17:21:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:34.315 17:21:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 661144'
00:29:34.315 killing process with pid 661144
00:29:34.315 17:21:50 -- common/autotest_common.sh@945 -- # kill 661144
00:29:34.315 Received shutdown signal, test time was about 2.000000 seconds
00:29:34.315
00:29:34.315 Latency(us)
00:29:34.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
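The transient-error check traced above reduces to one RPC plus a jq filter. A minimal standalone sketch, assuming (as in this run) that bdevperf listens on /var/tmp/bperf.sock and the SPDK rpc.py script is at the path shown in the trace; the helper name mirrors host/digest.sh:

    #!/usr/bin/env bash
    # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR for a bdev.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    get_transient_errcount() {
        local bdev=$1
        # bdev_get_iostat returns JSON; the jq path below is the same filter
        # host/digest.sh@28 applies in the trace above.
        "$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }
    # The digest test passes only if the injected corruption produced errors
    # (here the query returned 106):
    (( $(get_transient_errcount nvme0n1) > 0 ))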
00:29:34.315 ===================================================================================================================
00:29:34.315 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:34.315 17:21:50 -- common/autotest_common.sh@950 -- # wait 661144
00:29:34.574 17:21:50 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:29:34.574 17:21:50 -- host/digest.sh@54 -- # local rw bs qd
00:29:34.574 17:21:50 -- host/digest.sh@56 -- # rw=randwrite
00:29:34.574 17:21:50 -- host/digest.sh@56 -- # bs=4096
00:29:34.574 17:21:50 -- host/digest.sh@56 -- # qd=128
00:29:34.574 17:21:50 -- host/digest.sh@58 -- # bperfpid=661587
00:29:34.574 17:21:50 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:34.574 17:21:50 -- host/digest.sh@60 -- # waitforlisten 661587 /var/tmp/bperf.sock
00:29:34.574 17:21:50 -- common/autotest_common.sh@819 -- # '[' -z 661587 ']'
00:29:34.574 17:21:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:34.574 17:21:50 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:34.574 17:21:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:34.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:34.574 17:21:50 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:34.574 17:21:50 -- common/autotest_common.sh@10 -- # set +x
00:29:34.574 [2024-07-20 17:21:50.614253] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:29:34.574 [2024-07-20 17:21:50.614341] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661587 ]
00:29:34.574 EAL: No free 2048 kB hugepages reported on node 1
00:29:34.574 [2024-07-20 17:21:50.680062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:34.832 [2024-07-20 17:21:50.773882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:35.399 17:21:51 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:35.399 17:21:51 -- common/autotest_common.sh@852 -- # return 0
00:29:35.400 17:21:51 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:35.400 17:21:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:35.658 17:21:51 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:35.658 17:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:35.658 17:21:51 -- common/autotest_common.sh@10 -- # set +x
00:29:35.658 17:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:35.658 17:21:51 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:35.658 17:21:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:36.222 nvme0n1
00:29:36.222 17:21:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
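Condensing the setup traced above: error statistics and unlimited bdev retries are enabled first, the NVMe/TCP controller is attached with data digest enabled, then crc32c corruption is injected so computed digests mismatch on the wire. A hedged sketch of the same sequence, with addresses and options copied from this run; note that rpc_cmd in the trace appears to target the SPDK target application's default RPC socket, while bperf_rpc targets the bdevperf socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: record per-NVMe error stats and retry failed I/O forever,
    # so digest errors accumulate as transient-error counters instead of
    # failing the bdev outright.
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # bdevperf side: attach the NVMe/TCP controller with data digest (--ddgst) on.
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side (default RPC socket): corrupt crc32c results in the accel
    # layer, using exactly the -o/-t/-i arguments issued in the trace above.
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256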
17:21:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:36.222 17:21:52 -- common/autotest_common.sh@10 -- # set +x
00:29:36.222 17:21:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:36.222 17:21:52 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:36.222 17:21:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:36.222 Running I/O for 2 seconds...
00:29:36.222 [2024-07-20 17:21:52.355033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560
00:29:36.222 [2024-07-20 17:21:52.355410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.222 [2024-07-20 17:21:52.355463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... similar Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries on tqpair=(0x186d260) with pdu=0x2000190fc560, logged roughly every 13 ms with varying cids and LBAs, elided ...]
00:29:36.739 [2024-07-20 17:21:52.683969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560
00:29:36.739 [2024-07-20 17:21:52.684300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:36.739 [2024-07-20 17:21:52.684329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.696548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.696902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.696930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.709115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.709474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.709501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.721654] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.722003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.722031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.734049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.734401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.734445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.746602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.747006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.747034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.759273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.759635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.759661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.772178] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.772568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.772613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.784694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.785061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.785089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.797330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.797696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.797723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.809974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.810312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.810339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.822540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.822907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.822950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.835394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.835756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.835783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.739 [2024-07-20 17:21:52.848068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.739 [2024-07-20 17:21:52.848441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.739 [2024-07-20 17:21:52.848468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.740 [2024-07-20 17:21:52.860685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.740 [2024-07-20 17:21:52.861020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.740 [2024-07-20 
17:21:52.861048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.740 [2024-07-20 17:21:52.873326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.740 [2024-07-20 17:21:52.873705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.740 [2024-07-20 17:21:52.873732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.740 [2024-07-20 17:21:52.886062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.740 [2024-07-20 17:21:52.886399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.740 [2024-07-20 17:21:52.886427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.996 [2024-07-20 17:21:52.898596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.996 [2024-07-20 17:21:52.898974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.996 [2024-07-20 17:21:52.899001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.996 [2024-07-20 17:21:52.911265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.996 [2024-07-20 17:21:52.911633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.996 [2024-07-20 17:21:52.911660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.996 [2024-07-20 17:21:52.923761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.996 [2024-07-20 17:21:52.924153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:52.924195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:52.936469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:52.936832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:52.936861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:52.949383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:52.949744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16662 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:36.997 [2024-07-20 17:21:52.949770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:52.962073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:52.962437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:52.962464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:52.974678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:52.975029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:52.975056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:52.987313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:52.987660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:52.987709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.000001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.000346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.000388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.012590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.012941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.012969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.025210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.025571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.025597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.037789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.038145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:19783 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.038172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.050317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.050674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.050701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.062925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.063258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.063285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.075446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.075818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.075845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.087976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.088313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.088354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.100568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.100960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.100988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.113117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.113483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.113511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.125683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.126029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:51 nsid:1 lba:1500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.126057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.138243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.138628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.138669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:36.997 [2024-07-20 17:21:53.150769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:36.997 [2024-07-20 17:21:53.151124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.997 [2024-07-20 17:21:53.151152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.254 [2024-07-20 17:21:53.163327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.254 [2024-07-20 17:21:53.163675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.254 [2024-07-20 17:21:53.163719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.254 [2024-07-20 17:21:53.175936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.254 [2024-07-20 17:21:53.176270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.254 [2024-07-20 17:21:53.176298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.254 [2024-07-20 17:21:53.188652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.254 [2024-07-20 17:21:53.189023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.254 [2024-07-20 17:21:53.189051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.254 [2024-07-20 17:21:53.201294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.254 [2024-07-20 17:21:53.201668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.254 [2024-07-20 17:21:53.201711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.254 [2024-07-20 17:21:53.214053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.254 [2024-07-20 17:21:53.214402] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.254 [2024-07-20 17:21:53.214429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.254 [2024-07-20 17:21:53.226740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.254 [2024-07-20 17:21:53.227098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.254 [2024-07-20 17:21:53.227126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.254 [2024-07-20 17:21:53.239290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.254 [2024-07-20 17:21:53.239633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.254 [2024-07-20 17:21:53.239661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.254 [2024-07-20 17:21:53.251669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.254 [2024-07-20 17:21:53.252012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.254 [2024-07-20 17:21:53.252040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.264061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.264407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.264434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.276404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.276761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.276810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.288815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.289198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.289224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.301174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.301517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.301544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.313461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.313893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.313940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.325755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.326129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.326155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.338077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.338503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.338529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.350380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.350737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.350764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.362751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.363107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.363134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.375112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.375454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.375481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.387464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 
17:21:53.387817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.387845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.399656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.400003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.400030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.255 [2024-07-20 17:21:53.412038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.255 [2024-07-20 17:21:53.412437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.255 [2024-07-20 17:21:53.412464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.424444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.424840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.424867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.436914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.437263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.437304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.449343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.449686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.449716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.461722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.462077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.462108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.474053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 
00:29:37.512 [2024-07-20 17:21:53.474415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.474443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.486376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.486725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.486753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.498739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.499105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.499133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.511018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.511449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.511489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.523404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.523751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.523778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.535718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.536071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.536099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.548225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.548648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.548673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.560496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.560862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.560888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.572758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.573110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.573137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.585126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.585488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.585533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.597417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.597771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.597820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.609710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.610066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.610109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.512 [2024-07-20 17:21:53.622199] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.512 [2024-07-20 17:21:53.622543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.512 [2024-07-20 17:21:53.622571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.513 [2024-07-20 17:21:53.634512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.513 [2024-07-20 17:21:53.634965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.513 [2024-07-20 17:21:53.634998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.513 [2024-07-20 17:21:53.646681] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.513 [2024-07-20 17:21:53.647025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.513 [2024-07-20 17:21:53.647053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.513 [2024-07-20 17:21:53.659145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.513 [2024-07-20 17:21:53.659488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.513 [2024-07-20 17:21:53.659515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.671525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.671904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.671932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.683817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.684153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.684180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.696181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.696537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.696564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.708469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.708855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.708882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.720821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.721198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.721224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.733172] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.733522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.733564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.745383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.745767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.745801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.757739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.758123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.758149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.770110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.770488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.770516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.782622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.782994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.783022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.795078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.795436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.795463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 [2024-07-20 17:21:53.807494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560 00:29:37.770 [2024-07-20 17:21:53.807876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.770 [2024-07-20 17:21:53.807918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:37.770 
[2024-07-20 17:21:53.819847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d260) with pdu=0x2000190fc560
00:29:37.770 [2024-07-20 17:21:53.820275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:37.770 [2024-07-20 17:21:53.820301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... 42 further data-digest-error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets elided: timestamps 17:21:53.832178 through 17:21:54.342392 (one roughly every 12 ms), cid cycling 67/83/99/115/2/126/125/124/15/27/51, all on tqpair=(0x186d260), all completing with sqhd:007e p:0 m:0 dnr:0 ...]
00:29:38.285
00:29:38.285 Latency(us)
00:29:38.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.285 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:38.285 nvme0n1 : 2.01 20418.66 79.76 0.00 0.00 6256.52 3301.07 13204.29
00:29:38.285 ===================================================================================================================
00:29:38.285 Total : 20418.66 79.76 0.00 0.00 6256.52 3301.07 13204.29
00:29:38.285 0
00:29:38.285 17:21:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:38.285 17:21:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:38.285 17:21:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:38.285 17:21:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:38.285 | .driver_specific
00:29:38.285 | .nvme_error
00:29:38.285 | .status_code
00:29:38.285 | .command_transient_transport_error'
00:29:38.543 17:21:54 -- host/digest.sh@71 -- # (( 160 > 0 ))
00:29:38.543 17:21:54 -- host/digest.sh@73 -- # killprocess 661587
00:29:38.543 17:21:54 -- common/autotest_common.sh@926 -- # '[' -z 661587 ']'
00:29:38.543 17:21:54 -- common/autotest_common.sh@930 -- # kill -0 661587
00:29:38.543 17:21:54 -- common/autotest_common.sh@931 -- # uname
00:29:38.543 17:21:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:38.543 17:21:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 661587
00:29:38.543 17:21:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:38.543 17:21:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:38.543 17:21:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 661587'
00:29:38.543 killing process with pid 661587
00:29:38.543 17:21:54 -- common/autotest_common.sh@945 -- # kill 661587
00:29:38.543 Received shutdown signal, test time was about 2.000000 seconds
00:29:38.543
00:29:38.543 Latency(us)
00:29:38.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:38.543 ===================================================================================================================
00:29:38.543 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:38.543 17:21:54 -- common/autotest_common.sh@950 -- # wait 661587
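The pass/fail decision above rests on the transient-error counter. Below is a minimal sketch of the get_transient_errcount helper, reconstructed from the rpc.py and jq invocations traced in this log; $rootdir is an assumed variable standing in for the workspace checkout, and the real helper in host/digest.sh may differ in detail:

    # Ask bdevperf (listening on the bperf socket) for per-bdev I/O stats and
    # pull out how many completions ended in COMMAND TRANSIENT TRANSPORT ERROR.
    get_transient_errcount() {
        local bdev=$1
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

The (( 160 > 0 )) line above is this helper's output being asserted: with --nvme-error-stat enabled, each injected digest failure is counted as a transient transport error while --bdev-retry-count -1 keeps retrying the I/O instead of failing it outright.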
00:29:38.800 17:21:54 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:29:38.800 17:21:54 -- host/digest.sh@54 -- # local rw bs qd
00:29:38.800 17:21:54 -- host/digest.sh@56 -- # rw=randwrite
00:29:38.800 17:21:54 -- host/digest.sh@56 -- # bs=131072
00:29:38.800 17:21:54 -- host/digest.sh@56 -- # qd=16
00:29:38.800 17:21:54 -- host/digest.sh@58 -- # bperfpid=662132
00:29:38.800 17:21:54 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:38.800 17:21:54 -- host/digest.sh@60 -- # waitforlisten 662132 /var/tmp/bperf.sock
00:29:38.800 17:21:54 -- common/autotest_common.sh@819 -- # '[' -z 662132 ']'
00:29:38.800 17:21:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:38.800 17:21:54 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:38.800 17:21:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:38.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
17:21:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:38.800 17:21:54 -- common/autotest_common.sh@10 -- # set +x
00:29:38.800 [2024-07-20 17:21:54.882476] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:29:38.800 [2024-07-20 17:21:54.882548] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662132 ]
00:29:38.800 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:38.800 Zero copy mechanism will not be used.
00:29:38.800 EAL: No free 2048 kB hugepages reported on node 1
00:29:38.800 [2024-07-20 17:21:54.945212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:39.058 [2024-07-20 17:21:55.033945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:39.991 17:21:55 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:39.991 17:21:55 -- common/autotest_common.sh@852 -- # return 0
00:29:39.991 17:21:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:39.991 17:21:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:39.991 17:21:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:39.991 17:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:39.991 17:21:56 -- common/autotest_common.sh@10 -- # set +x
00:29:39.991 17:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:39.991 17:21:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:39.991 17:21:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:40.249 nvme0n1
00:29:40.249 17:21:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:40.249 17:21:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:40.249 17:21:56 -- common/autotest_common.sh@10 -- # set +x
00:29:40.249 17:21:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:40.249 17:21:56 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:40.249 17:21:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
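Condensed, the sequence just traced is the whole digest-error setup for this run. A sketch, assuming bperf_rpc and rpc_cmd are the wrappers around the two rpc.py invocations visible above (bperf_rpc addressing bdevperf on /var/tmp/bperf.sock, rpc_cmd apparently addressing the main nvmf target app):

    # Error statistics on, retries unlimited, so injected errors are counted, not fatal.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no stale injection is active before connecting.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with --ddgst: data PDUs now carry a CRC32C data digest.
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt every 32nd crc32c operation in the accel layer, so digest checks fail.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the 2-second randwrite job defined on the bdevperf command line.
    bperf_py perform_tests

Each data_crc32_calc_done error that follows is one of those corrupted digest computations being detected, and each surfaces to the host as a TRANSIENT TRANSPORT ERROR completion.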
00:29:40.506 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:40.506 Zero copy mechanism will not be used.
00:29:40.506 Running I/O for 2 seconds...
00:29:40.506 [2024-07-20 17:21:56.542488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x186d390) with pdu=0x2000190fef90
00:29:40.506 [2024-07-20 17:21:56.542974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.506 [2024-07-20 17:21:56.543013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... 57 further data-digest-error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets elided: timestamps 17:21:56.573566 through 17:21:58.493708 (one roughly every 30 ms), all on tqpair=(0x186d390) cid:15, len:32, sqhd cycling 0021/0041/0061/0001 ...]
00:29:42.567
00:29:42.567 Latency(us)
00:29:42.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.567 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:42.567 nvme0n1 : 2.02 906.68 113.33 0.00 0.00 17567.37 8980.86 38253.61
00:29:42.567 ===================================================================================================================
00:29:42.567 Total : 906.68 113.33 0.00 0.00 17567.37 8980.86 38253.61
00:29:42.567 0
00:29:42.567 17:21:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:42.567 17:21:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:42.567 17:21:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:42.567 17:21:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:42.567 | .driver_specific
00:29:42.567 | .nvme_error
00:29:42.567 | .status_code
00:29:42.567 | .command_transient_transport_error'
00:29:42.825 17:21:58 -- host/digest.sh@71 -- # (( 58 > 0 ))
00:29:42.825 17:21:58 -- host/digest.sh@73 -- # killprocess 662132
00:29:42.825 17:21:58 -- common/autotest_common.sh@926 -- # '[' -z 662132 ']'
00:29:42.825 17:21:58 -- common/autotest_common.sh@930 -- # kill -0 662132
00:29:42.825 17:21:58 -- common/autotest_common.sh@931 -- # uname
00:29:42.825 17:21:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:42.825 17:21:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 662132
00:29:42.825 17:21:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:42.825 17:21:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:42.825 17:21:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 662132'
00:29:42.825 killing process with pid 662132
00:29:42.825 17:21:58 -- common/autotest_common.sh@945 -- # kill 662132
00:29:42.825 Received shutdown signal, test time was about 2.000000 seconds
00:29:42.825
00:29:42.825 Latency(us)
00:29:42.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.825 ===================================================================================================================
00:29:42.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:42.825 17:21:58 -- common/autotest_common.sh@950 -- # wait 662132
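The numbers in the 2.02-second summary above hang together: at queue depth 16 and ~17.6 ms average latency, 16 / 0.0176 ≈ 909 IOPS matches the reported 906.68, and 906.68 IOPS at 128 KiB per I/O is the reported 113.33 MiB/s. Over the run that is roughly 1830 writes; with every 32nd crc32c operation corrupted, about 57 digest failures would be expected, consistent with the 58 transient transport errors counted.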
00:29:43.082 17:21:59 -- host/digest.sh@115 -- # killprocess 660573
00:29:43.082 17:21:59 -- common/autotest_common.sh@926 -- # '[' -z 660573 ']'
00:29:43.082 17:21:59 -- common/autotest_common.sh@930 -- # kill -0 660573
00:29:43.082 17:21:59 -- common/autotest_common.sh@931 -- # uname
00:29:43.082 17:21:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:43.082 17:21:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 660573
00:29:43.082 17:21:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:29:43.082 17:21:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:29:43.082 17:21:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 660573'
00:29:43.082 killing process with pid 660573
00:29:43.082 17:21:59 -- common/autotest_common.sh@945 -- # kill 660573
00:29:43.082 17:21:59 -- common/autotest_common.sh@950 -- # wait 660573
00:29:43.340
00:29:43.340 real 0m17.605s
00:29:43.340 user 0m36.191s
00:29:43.340 sys 0m3.811s
00:29:43.340 17:21:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:43.340 17:21:59 -- common/autotest_common.sh@10 -- # set +x
00:29:43.340 ************************************
00:29:43.340 END TEST nvmf_digest_error
00:29:43.340 ************************************
00:29:43.340 17:21:59 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:29:43.340 17:21:59 -- host/digest.sh@139 -- # nvmftestfini
00:29:43.340 17:21:59 -- nvmf/common.sh@476 -- # nvmfcleanup
00:29:43.340 17:21:59 -- nvmf/common.sh@116 -- # sync
00:29:43.340 17:21:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:29:43.340 17:21:59 -- nvmf/common.sh@119 -- # set +e
00:29:43.340 17:21:59 -- nvmf/common.sh@120 -- # for i in {1..20}
00:29:43.340 17:21:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:29:43.340 rmmod nvme_tcp
00:29:43.340 rmmod nvme_fabrics
00:29:43.340 rmmod nvme_keyring
00:29:43.340 17:21:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:29:43.340 17:21:59 -- nvmf/common.sh@123 -- # set -e
00:29:43.340 17:21:59 -- nvmf/common.sh@124 -- # return 0
00:29:43.340 17:21:59 -- nvmf/common.sh@477 -- # '[' -n 660573 ']'
00:29:43.340 17:21:59 -- nvmf/common.sh@478 -- # killprocess 660573
00:29:43.340 17:21:59 -- common/autotest_common.sh@926 -- # '[' -z 660573 ']'
00:29:43.340 17:21:59 -- common/autotest_common.sh@930 -- # kill -0 660573
00:29:43.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (660573) - No such process
00:29:43.340 17:21:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 660573 is not found'
00:29:43.340 Process with pid 660573 is not found
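killprocess is traced twice in quick succession above: once against the live nvmf target and once, from nvmftestfini, against the same pid after it has already exited, which is where the "No such process" and "is not found" lines come from. A sketch consistent with those traces follows; the body of the sudo branch is not visible in this log and is omitted:

    # Reconstructed from the xtrace above; autotest_common.sh's real version may differ.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || {
            # The second invocation above lands here: the process is already gone.
            echo "Process with pid $pid is not found"
            return 1
        }
        local process_name=
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The trace only shows a comparison against "sudo"; that branch is elided here.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child and propagate its exit status
    }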
00:29:43.340 17:21:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:29:43.340 17:21:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:29:43.340 17:21:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:29:43.340 17:21:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:43.340 17:21:59 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:29:43.340 17:21:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:43.340 17:21:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:43.340 17:21:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:45.238 17:22:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:29:45.497
00:29:45.497 real 0m37.074s
00:29:45.497 user 1m7.405s
00:29:45.497 sys 0m9.119s
17:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable
17:22:01 -- common/autotest_common.sh@10 -- # set +x
00:29:45.497 ************************************
00:29:45.497 END TEST nvmf_digest
00:29:45.497 ************************************
17:22:01 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]]
17:22:01 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]]
17:22:01 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]]
17:22:01 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
17:22:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
17:22:01 -- common/autotest_common.sh@1083 -- # xtrace_disable
17:22:01 -- common/autotest_common.sh@10 -- # set +x
00:29:45.497 ************************************
00:29:45.497 START TEST nvmf_bdevperf
00:29:45.497 ************************************
17:22:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:45.497 * Looking for test storage...
00:29:45.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
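The START/END banners and the real/user/sys blocks that bracket nvmf_digest and nvmf_bdevperf come from the run_test wrapper. A minimal sketch of the shape implied by this output; the real wrapper in autotest_common.sh also manages xtrace and exit-code bookkeeping that a short sketch leaves out:

    # Illustrative only: banner-and-timing wrapper inferred from the log output.
    run_test() {
        local test_name=$1
        shift
        [ $# -le 1 ] && return 1           # the trace shows an argument-count guard
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                          # produces the real/user/sys block above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

Here it is invoked as run_test nvmf_bdevperf .../host/bdevperf.sh --transport=tcp, which is what opens the section that follows.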
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicate /opt/golangci, /opt/protoc, /opt/go segments trimmed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.497 17:22:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[duplicate segments trimmed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.497 17:22:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[duplicate segments trimmed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.497 17:22:01 -- paths/export.sh@5 -- # export PATH 00:29:45.497 17:22:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[duplicate segments trimmed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.497 17:22:01 -- nvmf/common.sh@46 -- # : 0 00:29:45.497 17:22:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:45.497 17:22:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:45.497 17:22:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:45.497 17:22:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.497 17:22:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.497 17:22:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:45.497 17:22:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:45.497 17:22:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:45.497 17:22:01 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:45.497 17:22:01 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:45.497 17:22:01 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:45.497 17:22:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:45.497 17:22:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.497 17:22:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:45.497 17:22:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:45.497 17:22:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:45.497 17:22:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd
_remove_spdk_ns 00:29:45.497 17:22:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:45.497 17:22:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.497 17:22:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:45.497 17:22:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:45.497 17:22:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:45.497 17:22:01 -- common/autotest_common.sh@10 -- # set +x 00:29:47.394 17:22:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:47.394 17:22:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:47.394 17:22:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:47.394 17:22:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:47.394 17:22:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:47.394 17:22:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:47.394 17:22:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:47.394 17:22:03 -- nvmf/common.sh@294 -- # net_devs=() 00:29:47.394 17:22:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:47.394 17:22:03 -- nvmf/common.sh@295 -- # e810=() 00:29:47.394 17:22:03 -- nvmf/common.sh@295 -- # local -ga e810 00:29:47.394 17:22:03 -- nvmf/common.sh@296 -- # x722=() 00:29:47.394 17:22:03 -- nvmf/common.sh@296 -- # local -ga x722 00:29:47.394 17:22:03 -- nvmf/common.sh@297 -- # mlx=() 00:29:47.394 17:22:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:47.394 17:22:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.394 17:22:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:47.394 17:22:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:47.394 17:22:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:47.394 17:22:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:47.394 17:22:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:47.394 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:47.394 17:22:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:47.394 17:22:03 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:47.394 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:47.394 17:22:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:47.394 17:22:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:47.394 17:22:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.394 17:22:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:47.394 17:22:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.394 17:22:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:47.394 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:47.394 17:22:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.394 17:22:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:47.394 17:22:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.394 17:22:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:47.394 17:22:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.394 17:22:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:47.394 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:47.394 17:22:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.394 17:22:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:47.394 17:22:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:47.394 17:22:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:47.394 17:22:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.394 17:22:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.394 17:22:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.394 17:22:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:47.394 17:22:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.394 17:22:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.394 17:22:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:47.394 17:22:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.394 17:22:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.394 17:22:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:47.394 17:22:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:47.394 17:22:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.394 17:22:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.394 17:22:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.394 17:22:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.394 17:22:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:47.394 17:22:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
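(Aside: the "Found net devices under 0000:0a:00.x" lines above come from resolving each supported PCI function to its kernel netdev through sysfs. A minimal sketch of that lookup, illustrative only and not part of the test run; the BDF 0000:0a:00.0 is copied from the log, and the loop itself is mine rather than the script's:)
  # Resolve one PCI function to its netdev name(s), as nvmf/common.sh's
  # pci_net_devs glob does; prints e.g. "0000:0a:00.0 -> cvl_0_0".
  for dev in /sys/bus/pci/devices/0000:0a:00.0/net/*; do
      [ -e "$dev" ] || continue    # glob stays literal if no netdev is registered
      echo "0000:0a:00.0 -> ${dev##*/}"
  done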
00:29:47.394 17:22:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.394 17:22:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.394 17:22:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:47.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:29:47.394 00:29:47.394 --- 10.0.0.2 ping statistics --- 00:29:47.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.394 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:29:47.394 17:22:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:29:47.394 00:29:47.394 --- 10.0.0.1 ping statistics --- 00:29:47.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.394 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:29:47.394 17:22:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.394 17:22:03 -- nvmf/common.sh@410 -- # return 0 00:29:47.394 17:22:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:47.394 17:22:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.394 17:22:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:47.394 17:22:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.394 17:22:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:47.394 17:22:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:47.653 17:22:03 -- host/bdevperf.sh@25 -- # tgt_init 00:29:47.653 17:22:03 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:47.653 17:22:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:47.653 17:22:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:47.653 17:22:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.653 17:22:03 -- nvmf/common.sh@469 -- # nvmfpid=664644 00:29:47.653 17:22:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:47.653 17:22:03 -- nvmf/common.sh@470 -- # waitforlisten 664644 00:29:47.653 17:22:03 -- common/autotest_common.sh@819 -- # '[' -z 664644 ']' 00:29:47.653 17:22:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.653 17:22:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:47.653 17:22:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.653 17:22:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:47.653 17:22:03 -- common/autotest_common.sh@10 -- # set +x 00:29:47.653 [2024-07-20 17:22:03.617627] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
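(Aside: a condensed sketch of the network plumbing nvmf_tcp_init just performed above: one physical port is moved into a private namespace to act as the target while its peer port stays in the default namespace as the initiator, so both ends of the TCP connection run on one host over real hardware. Interface names and 10.0.0.x addresses mirror the log; this is an illustration, not the script itself:)
  NS=cvl_0_0_ns_spdk                                   # target-side namespace, as above
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the default namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check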
00:29:47.653 [2024-07-20 17:22:03.617708] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.653 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.653 [2024-07-20 17:22:03.689279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:47.653 [2024-07-20 17:22:03.780121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:47.653 [2024-07-20 17:22:03.780280] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.653 [2024-07-20 17:22:03.780298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.653 [2024-07-20 17:22:03.780321] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.653 [2024-07-20 17:22:03.780526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.653 [2024-07-20 17:22:03.780585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.653 [2024-07-20 17:22:03.780588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.590 17:22:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:48.590 17:22:04 -- common/autotest_common.sh@852 -- # return 0 00:29:48.590 17:22:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:48.590 17:22:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:48.590 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:29:48.590 17:22:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.590 17:22:04 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.590 17:22:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.590 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:29:48.590 [2024-07-20 17:22:04.586408] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.590 17:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.590 17:22:04 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:48.590 17:22:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.590 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:29:48.590 Malloc0 00:29:48.590 17:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.590 17:22:04 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.590 17:22:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.590 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:29:48.590 17:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.590 17:22:04 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:48.590 17:22:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.590 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:29:48.590 17:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.590 17:22:04 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.590 17:22:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:48.590 17:22:04 -- common/autotest_common.sh@10 -- # set +x 00:29:48.590 [2024-07-20 17:22:04.651976] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.590 17:22:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:48.590 17:22:04 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:48.590 17:22:04 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:48.590 17:22:04 -- nvmf/common.sh@520 -- # config=() 00:29:48.590 17:22:04 -- nvmf/common.sh@520 -- # local subsystem config 00:29:48.590 17:22:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:48.590 17:22:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:48.590 { 00:29:48.590 "params": { 00:29:48.590 "name": "Nvme$subsystem", 00:29:48.590 "trtype": "$TEST_TRANSPORT", 00:29:48.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.590 "adrfam": "ipv4", 00:29:48.590 "trsvcid": "$NVMF_PORT", 00:29:48.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.590 "hdgst": ${hdgst:-false}, 00:29:48.590 "ddgst": ${ddgst:-false} 00:29:48.590 }, 00:29:48.590 "method": "bdev_nvme_attach_controller" 00:29:48.590 } 00:29:48.590 EOF 00:29:48.590 )") 00:29:48.590 17:22:04 -- nvmf/common.sh@542 -- # cat 00:29:48.590 17:22:04 -- nvmf/common.sh@544 -- # jq . 00:29:48.590 17:22:04 -- nvmf/common.sh@545 -- # IFS=, 00:29:48.590 17:22:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:48.590 "params": { 00:29:48.590 "name": "Nvme1", 00:29:48.590 "trtype": "tcp", 00:29:48.590 "traddr": "10.0.0.2", 00:29:48.590 "adrfam": "ipv4", 00:29:48.590 "trsvcid": "4420", 00:29:48.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.590 "hdgst": false, 00:29:48.590 "ddgst": false 00:29:48.590 }, 00:29:48.590 "method": "bdev_nvme_attach_controller" 00:29:48.590 }' 00:29:48.590 [2024-07-20 17:22:04.697683] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:48.590 [2024-07-20 17:22:04.697749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664806 ] 00:29:48.590 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.848 [2024-07-20 17:22:04.757965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.848 [2024-07-20 17:22:04.846243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.105 Running I/O for 1 seconds... 
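(Aside: the rpc_cmd calls above provision the freshly started target. A sketch of the same sequence driven directly through scripts/rpc.py against the default /var/tmp/spdk.sock; the rpc.py path assumes a standard SPDK source tree, while every flag and argument is copied from the log:)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420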
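(Aside: the gen_nvmf_target_json output printed above is what bdevperf consumes through --json /dev/fd/62. A hypothetical standalone equivalent that writes the same attach parameters to a regular file: the params block is copied from the log, while the file name and the outer "subsystems"/"bdev" wrapper, the usual SPDK JSON-config shape, are assumptions here:)
  cat > /tmp/bdevperf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1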
00:29:50.037 00:29:50.037 Latency(us) 00:29:50.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.037 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:50.037 Verification LBA range: start 0x0 length 0x4000 00:29:50.037 Nvme1n1 : 1.01 13289.39 51.91 0.00 0.00 9588.78 1456.36 15049.01 00:29:50.037 =================================================================================================================== 00:29:50.037 Total : 13289.39 51.91 0.00 0.00 9588.78 1456.36 15049.01 00:29:50.295 17:22:06 -- host/bdevperf.sh@30 -- # bdevperfpid=664957 00:29:50.295 17:22:06 -- host/bdevperf.sh@32 -- # sleep 3 00:29:50.295 17:22:06 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:50.295 17:22:06 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:50.295 17:22:06 -- nvmf/common.sh@520 -- # config=() 00:29:50.295 17:22:06 -- nvmf/common.sh@520 -- # local subsystem config 00:29:50.295 17:22:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:50.295 17:22:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:50.295 { 00:29:50.295 "params": { 00:29:50.295 "name": "Nvme$subsystem", 00:29:50.295 "trtype": "$TEST_TRANSPORT", 00:29:50.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.295 "adrfam": "ipv4", 00:29:50.295 "trsvcid": "$NVMF_PORT", 00:29:50.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.295 "hdgst": ${hdgst:-false}, 00:29:50.295 "ddgst": ${ddgst:-false} 00:29:50.295 }, 00:29:50.295 "method": "bdev_nvme_attach_controller" 00:29:50.295 } 00:29:50.295 EOF 00:29:50.295 )") 00:29:50.295 17:22:06 -- nvmf/common.sh@542 -- # cat 00:29:50.295 17:22:06 -- nvmf/common.sh@544 -- # jq . 00:29:50.295 17:22:06 -- nvmf/common.sh@545 -- # IFS=, 00:29:50.295 17:22:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:50.295 "params": { 00:29:50.295 "name": "Nvme1", 00:29:50.295 "trtype": "tcp", 00:29:50.295 "traddr": "10.0.0.2", 00:29:50.295 "adrfam": "ipv4", 00:29:50.295 "trsvcid": "4420", 00:29:50.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:50.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:50.295 "hdgst": false, 00:29:50.295 "ddgst": false 00:29:50.295 }, 00:29:50.295 "method": "bdev_nvme_attach_controller" 00:29:50.295 }' 00:29:50.295 [2024-07-20 17:22:06.320145] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:50.295 [2024-07-20 17:22:06.320236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664957 ] 00:29:50.295 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.295 [2024-07-20 17:22:06.380890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.552 [2024-07-20 17:22:06.465453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.810 Running I/O for 15 seconds... 
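(Aside: the second bdevperf invocation above runs for 15 seconds precisely so the script can kill the target out from under it, which is what host/bdevperf.sh@33 does next. A sketch of that fault-injection pattern, reusing the hypothetical config file from the sketch above; $TGT_PID stands in for the nvmfpid the script tracks, 664644 in this run:)
  ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
  bdevperf_pid=$!
  sleep 3                     # let the verify workload get I/O in flight
  kill -9 "$TGT_PID"          # SIGKILL the target; queued I/O completes as ABORTED - SQ DELETION
  wait "$bdevperf_pid"        # bdevperf should survive the failure and report it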
00:29:53.341 17:22:09 -- host/bdevperf.sh@33 -- # kill -9 664644 00:29:53.341 17:22:09 -- host/bdevperf.sh@35 -- # sleep 3 00:29:53.341 [2024-07-20 17:22:09.296608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.341 [2024-07-20 17:22:09.296666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.341 [2024-07-20 17:22:09.296703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.341 [2024-07-20 17:22:09.296722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.341-00:29:53.344 [2024-07-20 17:22:09.296743 through 17:22:09.300873: identical nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) notice pairs, one per remaining outstanding READ/WRITE command on qid:1; only cid and lba vary (lba 19728-21064); duplicate pairs trimmed] 00:29:53.344
[2024-07-20 17:22:09.300889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.344 [2024-07-20 17:22:09.300910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.344 [2024-07-20 17:22:09.300926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.344 [2024-07-20 17:22:09.300940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.344 [2024-07-20 17:22:09.300955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.344 [2024-07-20 17:22:09.300970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.344 [2024-07-20 17:22:09.300985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.344 [2024-07-20 17:22:09.300999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.344 [2024-07-20 17:22:09.301015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.344 [2024-07-20 17:22:09.301029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.344 [2024-07-20 17:22:09.301044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.344 [2024-07-20 17:22:09.301058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.344 [2024-07-20 17:22:09.301089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb1590 is same with the state(5) to be set 00:29:53.344 [2024-07-20 17:22:09.301110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:53.344 [2024-07-20 17:22:09.301123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:53.344 [2024-07-20 17:22:09.301137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20392 len:8 PRP1 0x0 PRP2 0x0 00:29:53.344 [2024-07-20 17:22:09.301152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.344 [2024-07-20 17:22:09.301227] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcb1590 was disconnected and freed. reset controller. 
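The block above is the qpair being drained during the reset: every READ/WRITE still queued on I/O queue 1 (qid:1; cid is the per-command identifier) is completed with status (00/08), i.e. status code type 0x0 (generic) and status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion"; the trailing p/m/dnr flags are the phase, more, and do-not-retry bits. A minimal decoding sketch, assuming the NVMe 1.4 status-halfword layout (a standalone illustration, not SPDK's own printer):

/* decode_status.c - decode the NVMe completion status halfword the way the
 * "(00/08) ... p:0 m:0 dnr:0" entries above are formatted: (SCT/SC) plus
 * the phase, more, and do-not-retry bits. */
#include <stdio.h>
#include <stdint.h>

static void decode_status(uint16_t sw)
{
    unsigned p   = sw & 0x1;          /* bit 0: phase tag             */
    unsigned sc  = (sw >> 1) & 0xff;  /* bits 8:1: status code        */
    unsigned sct = (sw >> 9) & 0x7;   /* bits 11:9: status code type  */
    unsigned m   = (sw >> 14) & 0x1;  /* bit 14: more                 */
    unsigned dnr = (sw >> 15) & 0x1;  /* bit 15: do not retry         */

    printf("(%02x/%02x) p:%u m:%u dnr:%u", sct, sc, p, m, dnr);
    if (sct == 0x0 && sc == 0x08)
        printf("  -> generic status: Command Aborted due to SQ Deletion");
    printf("\n");
}

int main(void)
{
    decode_status(0x08 << 1);   /* SCT 0x0, SC 0x08: the value the log shows */
    return 0;
}

Compiled and run, this prints the same "(00/08) p:0 m:0 dnr:0" tag that appears on every aborted entry above.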
00:29:53.344 [2024-07-20 17:22:09.304233] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.344 [2024-07-20 17:22:09.304310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.344 [2024-07-20 17:22:09.304977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.305283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.305312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.344 [2024-07-20 17:22:09.305335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.344 [2024-07-20 17:22:09.305505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.344 [2024-07-20 17:22:09.305696] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.344 [2024-07-20 17:22:09.305720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.344 [2024-07-20 17:22:09.305741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.344 [2024-07-20 17:22:09.308034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.344 [2024-07-20 17:22:09.317276] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.344 [2024-07-20 17:22:09.317913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.318145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.318170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.344 [2024-07-20 17:22:09.318186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.344 [2024-07-20 17:22:09.318394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.344 [2024-07-20 17:22:09.318561] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.344 [2024-07-20 17:22:09.318585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.344 [2024-07-20 17:22:09.318602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.344 [2024-07-20 17:22:09.321063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
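From here on the log settles into a reconnect loop: each attempt's connect() fails with errno = 111, which on Linux is ECONNREFUSED, meaning the host at 10.0.0.2 is reachable but nothing is accepting on port 4420 (the NVMe-oF target port used in this test) while the target side is down. A self-contained POSIX sketch of the same failure; run it against a reachable host with no listener on that port, since an unreachable address would instead time out with a different errno:

/* connect_refused.c - reproduce the posix_sock_create failure mode above:
 * connect() to a port with no listener fails with errno 111 (ECONNREFUSED)
 * on Linux. Address and port are taken from the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe-oF port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}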
00:29:53.344 [2024-07-20 17:22:09.329836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.344 [2024-07-20 17:22:09.330289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.330791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.330865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.344 [2024-07-20 17:22:09.330884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.344 [2024-07-20 17:22:09.331013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.344 [2024-07-20 17:22:09.331219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.344 [2024-07-20 17:22:09.331244] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.344 [2024-07-20 17:22:09.331260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.344 [2024-07-20 17:22:09.333400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.344 [2024-07-20 17:22:09.342376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.344 [2024-07-20 17:22:09.342804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.343063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.343091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.344 [2024-07-20 17:22:09.343109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.344 [2024-07-20 17:22:09.343263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.344 [2024-07-20 17:22:09.343415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.344 [2024-07-20 17:22:09.343439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.344 [2024-07-20 17:22:09.343456] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.344 [2024-07-20 17:22:09.345819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.344 [2024-07-20 17:22:09.355141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.344 [2024-07-20 17:22:09.355559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.355835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.355862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.344 [2024-07-20 17:22:09.355878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.344 [2024-07-20 17:22:09.356084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.344 [2024-07-20 17:22:09.356200] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.344 [2024-07-20 17:22:09.356224] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.344 [2024-07-20 17:22:09.356240] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.344 [2024-07-20 17:22:09.358685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.344 [2024-07-20 17:22:09.367634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.344 [2024-07-20 17:22:09.368079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.368432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.344 [2024-07-20 17:22:09.368461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.344 [2024-07-20 17:22:09.368478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.344 [2024-07-20 17:22:09.368662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.344 [2024-07-20 17:22:09.368899] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.344 [2024-07-20 17:22:09.368924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.368940] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.371150] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
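The interleaved "Failed to flush tqpair=0xc92030 (9): Bad file descriptor" lines are errno 9 (EBADF): by the time the flush runs, the qpair's socket has already been closed. The same errno is reproducible with any descriptor used after close(); a minimal sketch:

/* ebadf_demo.c - the "(9): Bad file descriptor" in the flush errors above is
 * errno 9 (EBADF). Writing to a descriptor that has been closed produces it. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }
    close(fds[1]);                       /* descriptor is now invalid */

    if (write(fds[1], "x", 1) < 0)
        printf("write failed (%d): %s\n", errno, strerror(errno));

    close(fds[0]);
    return 0;
}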
00:29:53.345 [2024-07-20 17:22:09.380180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.380608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.380921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.380951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.345 [2024-07-20 17:22:09.380969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.345 [2024-07-20 17:22:09.381099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.345 [2024-07-20 17:22:09.381292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.345 [2024-07-20 17:22:09.381317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.381333] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.383808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.345 [2024-07-20 17:22:09.392605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.393052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.393289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.393317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.345 [2024-07-20 17:22:09.393335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.345 [2024-07-20 17:22:09.393536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.345 [2024-07-20 17:22:09.393707] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.345 [2024-07-20 17:22:09.393731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.393747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.396016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.345 [2024-07-20 17:22:09.405300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.405803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.406045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.406073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.345 [2024-07-20 17:22:09.406091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.345 [2024-07-20 17:22:09.406275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.345 [2024-07-20 17:22:09.406463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.345 [2024-07-20 17:22:09.406487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.406503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.408811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.345 [2024-07-20 17:22:09.417988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.418506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.418828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.418861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.345 [2024-07-20 17:22:09.418879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.345 [2024-07-20 17:22:09.419052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.345 [2024-07-20 17:22:09.419188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.345 [2024-07-20 17:22:09.419218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.419235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.421593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.345 [2024-07-20 17:22:09.430478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.431037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.431321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.431350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.345 [2024-07-20 17:22:09.431368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.345 [2024-07-20 17:22:09.431499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.345 [2024-07-20 17:22:09.431670] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.345 [2024-07-20 17:22:09.431694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.431710] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.434059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.345 [2024-07-20 17:22:09.443023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.443543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.443899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.443926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.345 [2024-07-20 17:22:09.443942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.345 [2024-07-20 17:22:09.444104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.345 [2024-07-20 17:22:09.444311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.345 [2024-07-20 17:22:09.444335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.444351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.446668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.345 [2024-07-20 17:22:09.455646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.456109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.456597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.456646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.345 [2024-07-20 17:22:09.456664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.345 [2024-07-20 17:22:09.456860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.345 [2024-07-20 17:22:09.457031] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.345 [2024-07-20 17:22:09.457055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.457078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.459435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.345 [2024-07-20 17:22:09.468298] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.468757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.469040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.469069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.345 [2024-07-20 17:22:09.469087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.345 [2024-07-20 17:22:09.469217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.345 [2024-07-20 17:22:09.469406] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.345 [2024-07-20 17:22:09.469430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.469446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.471783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.345 [2024-07-20 17:22:09.480834] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.481549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.481858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.481888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.345 [2024-07-20 17:22:09.481907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.345 [2024-07-20 17:22:09.482055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.345 [2024-07-20 17:22:09.482243] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.345 [2024-07-20 17:22:09.482268] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.345 [2024-07-20 17:22:09.482284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.345 [2024-07-20 17:22:09.484640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.345 [2024-07-20 17:22:09.493378] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.345 [2024-07-20 17:22:09.493847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.494120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.345 [2024-07-20 17:22:09.494149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.346 [2024-07-20 17:22:09.494167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.346 [2024-07-20 17:22:09.494369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.346 [2024-07-20 17:22:09.494593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.346 [2024-07-20 17:22:09.494618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.346 [2024-07-20 17:22:09.494634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.605 [2024-07-20 17:22:09.496982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.605 [2024-07-20 17:22:09.505916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.605 [2024-07-20 17:22:09.506276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.605 [2024-07-20 17:22:09.506535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.605 [2024-07-20 17:22:09.506561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.605 [2024-07-20 17:22:09.506577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.605 [2024-07-20 17:22:09.506757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.605 [2024-07-20 17:22:09.506902] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.605 [2024-07-20 17:22:09.506927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.605 [2024-07-20 17:22:09.506944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.605 [2024-07-20 17:22:09.509243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.605 [2024-07-20 17:22:09.518332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.518773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.519049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.519075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.519090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.519266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.519473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.519497] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.519513] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.521902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
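Each failed attempt above follows the same rhythm, roughly 12-13 ms apart: disconnect, retry the socket connect, hit ECONNREFUSED, mark the controller failed, and go around again. The sketch below condenses that pattern into a bounded retry loop with a short delay; try_reconnect() is a hypothetical stand-in for the transport connect step, not an SPDK call:

/* retry_loop.c - a schematic sketch (hypothetical helpers, not SPDK's code)
 * of the cycle the log repeats: attempt a reconnect, and on failure back off
 * briefly before the next attempt instead of spinning. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Stand-in for the transport connect step; in the log this is the
 * nvme_tcp_qpair_connect_sock() call that keeps hitting ECONNREFUSED. */
static bool try_reconnect(void) { return false; /* target still down */ }

int main(void)
{
    const int max_attempts = 5;                       /* bound the loop      */
    struct timespec delay = { 0, 12 * 1000 * 1000 };  /* ~12 ms, as observed */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_reconnect()) {
            printf("attempt %d: reconnected\n", attempt);
            return 0;
        }
        printf("attempt %d: resetting controller failed, retrying\n", attempt);
        nanosleep(&delay, NULL);
    }
    printf("giving up after %d attempts\n", max_attempts);
    return 1;
}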
00:29:53.606 [2024-07-20 17:22:09.531117] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.531622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.531899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.531927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.531944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.532092] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.532263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.532287] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.532303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.534782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.606 [2024-07-20 17:22:09.543615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.544076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.544421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.544450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.544468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.544635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.544787] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.544822] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.544838] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.547236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.606 [2024-07-20 17:22:09.556415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.556842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.557058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.557084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.557100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.557265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.557401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.557425] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.557442] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.559747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.606 [2024-07-20 17:22:09.568785] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.569247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.569534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.569562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.569580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.569782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.569980] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.570005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.570021] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.572358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.606 [2024-07-20 17:22:09.581406] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.581889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.582186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.582215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.582233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.582435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.582605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.582630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.582645] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.584829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.606 [2024-07-20 17:22:09.594193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.594883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.595172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.595200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.595218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.595383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.595517] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.595541] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.595557] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.597789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.606 [2024-07-20 17:22:09.606692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.607123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.607623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.607674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.607692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.607832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.607970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.607994] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.608010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.610419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.606 [2024-07-20 17:22:09.619283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.619707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.619973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.620009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.620028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.620159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.620329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.620353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.620369] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.622563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.606 [2024-07-20 17:22:09.631738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.632173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.632724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.632772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.632789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.632984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.633154] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.633178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.633194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.635783] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.606 [2024-07-20 17:22:09.644321] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.644848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.645127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.645157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.606 [2024-07-20 17:22:09.645175] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.606 [2024-07-20 17:22:09.645323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.606 [2024-07-20 17:22:09.645493] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.606 [2024-07-20 17:22:09.645517] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.606 [2024-07-20 17:22:09.645533] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.606 [2024-07-20 17:22:09.647950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.606 [2024-07-20 17:22:09.657007] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.606 [2024-07-20 17:22:09.657404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.657859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.606 [2024-07-20 17:22:09.657888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.607 [2024-07-20 17:22:09.657910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.607 [2024-07-20 17:22:09.658078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.607 [2024-07-20 17:22:09.658248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.607 [2024-07-20 17:22:09.658272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.607 [2024-07-20 17:22:09.658288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.607 [2024-07-20 17:22:09.660678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.607 [2024-07-20 17:22:09.669574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.607 [2024-07-20 17:22:09.670022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.670508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.670558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.607 [2024-07-20 17:22:09.670576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.607 [2024-07-20 17:22:09.670760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.607 [2024-07-20 17:22:09.670958] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.607 [2024-07-20 17:22:09.670983] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.607 [2024-07-20 17:22:09.670999] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.607 [2024-07-20 17:22:09.673406] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
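For readers skimming the remaining cycles: every one walks the same four log sites in order (nvme_ctrlr_disconnect, nvme_tcp_qpair_connect_sock, nvme_ctrlr_process_init / spdk_nvme_ctrlr_reconnect_poll_async, nvme_ctrlr_fail) before bdev_nvme reports "Resetting controller failed." A hypothetical condensation of that sequence; the enum names mirror the log messages, not SPDK's internal states:

/* reset_states.c - a schematic view of the state sequence each failed
 * cycle above traverses. Illustration only. */
#include <stdio.h>

enum ctrlr_state {
    DISCONNECTING,    /* nvme_ctrlr_disconnect: resetting controller        */
    CONNECTING_SOCK,  /* nvme_tcp_qpair_connect_sock: connect() attempt     */
    PROCESS_INIT,     /* nvme_ctrlr_process_init: ctrlr is in error state   */
    FAILED            /* nvme_ctrlr_fail: in failed state                   */
};

int main(void)
{
    static const char *names[] = {
        "disconnecting", "connecting socket", "processing init", "failed"
    };
    /* One cycle from the log: every attempt ends in FAILED because the
     * socket connect keeps returning ECONNREFUSED. */
    enum ctrlr_state cycle[] = {
        DISCONNECTING, CONNECTING_SOCK, PROCESS_INIT, FAILED
    };
    for (unsigned i = 0; i < sizeof(cycle) / sizeof(cycle[0]); i++)
        printf("-> %s\n", names[cycle[i]]);
    return 0;
}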
00:29:53.607 [2024-07-20 17:22:09.681934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.607 [2024-07-20 17:22:09.682405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.682876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.682905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.607 [2024-07-20 17:22:09.682924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.607 [2024-07-20 17:22:09.683053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.607 [2024-07-20 17:22:09.683205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.607 [2024-07-20 17:22:09.683230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.607 [2024-07-20 17:22:09.683245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.607 [2024-07-20 17:22:09.685527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.607 [2024-07-20 17:22:09.694316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.607 [2024-07-20 17:22:09.694729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.694998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.695028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.607 [2024-07-20 17:22:09.695046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.607 [2024-07-20 17:22:09.695199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.607 [2024-07-20 17:22:09.695352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.607 [2024-07-20 17:22:09.695376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.607 [2024-07-20 17:22:09.695392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.607 [2024-07-20 17:22:09.697601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.607 [2024-07-20 17:22:09.706923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.607 [2024-07-20 17:22:09.707382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.707626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.707668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.607 [2024-07-20 17:22:09.707686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.607 [2024-07-20 17:22:09.707863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.607 [2024-07-20 17:22:09.707962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.607 [2024-07-20 17:22:09.707985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.607 [2024-07-20 17:22:09.708001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.607 [2024-07-20 17:22:09.710431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.607 [2024-07-20 17:22:09.719699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.607 [2024-07-20 17:22:09.720177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.720717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.607 [2024-07-20 17:22:09.720767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:53.607 [2024-07-20 17:22:09.720785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:53.607 [2024-07-20 17:22:09.720961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:53.607 [2024-07-20 17:22:09.721095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.607 [2024-07-20 17:22:09.721119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.607 [2024-07-20 17:22:09.721135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.607 [2024-07-20 17:22:09.723618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.607 [2024-07-20 17:22:09.732301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.607 [2024-07-20 17:22:09.732782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.607 [2024-07-20 17:22:09.733057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.607 [2024-07-20 17:22:09.733085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.607 [2024-07-20 17:22:09.733103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.607 [2024-07-20 17:22:09.733251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.607 [2024-07-20 17:22:09.733412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.607 [2024-07-20 17:22:09.733437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.607 [2024-07-20 17:22:09.733452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.607 [2024-07-20 17:22:09.735667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.607 [2024-07-20 17:22:09.745152] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.607 [2024-07-20 17:22:09.745612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.607 [2024-07-20 17:22:09.745852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.607 [2024-07-20 17:22:09.745881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.607 [2024-07-20 17:22:09.745899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.607 [2024-07-20 17:22:09.746066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.607 [2024-07-20 17:22:09.746255] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.607 [2024-07-20 17:22:09.746279] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.607 [2024-07-20 17:22:09.746295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.607 [2024-07-20 17:22:09.748613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.607 [2024-07-20 17:22:09.757875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.607 [2024-07-20 17:22:09.758274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.607 [2024-07-20 17:22:09.758546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.607 [2024-07-20 17:22:09.758574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.607 [2024-07-20 17:22:09.758592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.607 [2024-07-20 17:22:09.758803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.607 [2024-07-20 17:22:09.758993] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.607 [2024-07-20 17:22:09.759017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.607 [2024-07-20 17:22:09.759033] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.607 [2024-07-20 17:22:09.761152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.867 [2024-07-20 17:22:09.770776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.867 [2024-07-20 17:22:09.771234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.771672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.771722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.867 [2024-07-20 17:22:09.771740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.867 [2024-07-20 17:22:09.771901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.867 [2024-07-20 17:22:09.772108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.867 [2024-07-20 17:22:09.772132] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.867 [2024-07-20 17:22:09.772155] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.867 [2024-07-20 17:22:09.774331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.867 [2024-07-20 17:22:09.783331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.867 [2024-07-20 17:22:09.783778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.784085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.784113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.867 [2024-07-20 17:22:09.784131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.867 [2024-07-20 17:22:09.784296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.867 [2024-07-20 17:22:09.784501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.867 [2024-07-20 17:22:09.784525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.867 [2024-07-20 17:22:09.784542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.867 [2024-07-20 17:22:09.786960] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.867 [2024-07-20 17:22:09.795985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.867 [2024-07-20 17:22:09.796443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.796910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.796939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.867 [2024-07-20 17:22:09.796957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.867 [2024-07-20 17:22:09.797177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.867 [2024-07-20 17:22:09.797364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.867 [2024-07-20 17:22:09.797389] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.867 [2024-07-20 17:22:09.797405] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.867 [2024-07-20 17:22:09.799664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.867 [2024-07-20 17:22:09.808798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.867 [2024-07-20 17:22:09.809277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.809777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.809846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.867 [2024-07-20 17:22:09.809865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.867 [2024-07-20 17:22:09.810085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.867 [2024-07-20 17:22:09.810254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.867 [2024-07-20 17:22:09.810278] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.867 [2024-07-20 17:22:09.810295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.867 [2024-07-20 17:22:09.812653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.867 [2024-07-20 17:22:09.821448] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.867 [2024-07-20 17:22:09.821968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.822261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.822289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.867 [2024-07-20 17:22:09.822308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.867 [2024-07-20 17:22:09.822492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.867 [2024-07-20 17:22:09.822627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.867 [2024-07-20 17:22:09.822650] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.867 [2024-07-20 17:22:09.822667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.867 [2024-07-20 17:22:09.824993] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.867 [2024-07-20 17:22:09.834099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.867 [2024-07-20 17:22:09.834513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.834946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.834976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.867 [2024-07-20 17:22:09.834994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.867 [2024-07-20 17:22:09.835196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.867 [2024-07-20 17:22:09.835384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.867 [2024-07-20 17:22:09.835408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.867 [2024-07-20 17:22:09.835424] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.867 [2024-07-20 17:22:09.837636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.867 [2024-07-20 17:22:09.846671] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.867 [2024-07-20 17:22:09.847178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.847740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.847791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.867 [2024-07-20 17:22:09.847819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.867 [2024-07-20 17:22:09.847968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.867 [2024-07-20 17:22:09.848102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.867 [2024-07-20 17:22:09.848126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.867 [2024-07-20 17:22:09.848143] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.867 [2024-07-20 17:22:09.850404] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.867 [2024-07-20 17:22:09.859230] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.867 [2024-07-20 17:22:09.859884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.860150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.860175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.867 [2024-07-20 17:22:09.860191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.867 [2024-07-20 17:22:09.860396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.867 [2024-07-20 17:22:09.860543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.867 [2024-07-20 17:22:09.860568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.867 [2024-07-20 17:22:09.860584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.867 [2024-07-20 17:22:09.862986] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.867 [2024-07-20 17:22:09.871864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.867 [2024-07-20 17:22:09.872363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.872685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.867 [2024-07-20 17:22:09.872713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.872730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.872925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.873078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.873102] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.873118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.875472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.884458] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.884889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.885148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.885176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.885194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.885396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.885602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.885626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.885643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.887846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.896991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.897428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.897866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.897897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.897916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.898082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.898288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.898312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.898328] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.900574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.909552] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.909996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.910239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.910269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.910287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.910453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.910659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.910683] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.910699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.913076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.922189] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.922628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.922898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.922929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.922947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.923113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.923283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.923308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.923324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.925811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.934771] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.935232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.935550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.935585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.935604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.935816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.936005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.936030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.936047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.938327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.947553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.947999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.948260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.948291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.948309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.948439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.948609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.948633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.948648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.950922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.960129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.960558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.960844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.960875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.960893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.961060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.961248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.961273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.961289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.963625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.972686] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.973133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.973596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.973647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.973670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.973849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.974001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.974025] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.974041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.976433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.985108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.985557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.985966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.985997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.986014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.986217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.986423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.986448] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.986463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:09.988779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:09.997821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:09.998275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.998602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:09.998650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:09.998669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.868 [2024-07-20 17:22:09.998859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.868 [2024-07-20 17:22:09.999028] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.868 [2024-07-20 17:22:09.999049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.868 [2024-07-20 17:22:09.999063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.868 [2024-07-20 17:22:10.001580] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.868 [2024-07-20 17:22:10.010826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.868 [2024-07-20 17:22:10.011530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:10.011880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.868 [2024-07-20 17:22:10.011928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:53.868 [2024-07-20 17:22:10.011947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:53.869 [2024-07-20 17:22:10.012133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:53.869 [2024-07-20 17:22:10.012322] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.869 [2024-07-20 17:22:10.012348] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.869 [2024-07-20 17:22:10.012366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.869 [2024-07-20 17:22:10.014807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.869 [2024-07-20 17:22:10.023474] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.128 [2024-07-20 17:22:10.023834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.128 [2024-07-20 17:22:10.024080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.128 [2024-07-20 17:22:10.024107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.128 [2024-07-20 17:22:10.024123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.128 [2024-07-20 17:22:10.024321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.128 [2024-07-20 17:22:10.024529] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.128 [2024-07-20 17:22:10.024553] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.128 [2024-07-20 17:22:10.024569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.128 [2024-07-20 17:22:10.026816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.128 [2024-07-20 17:22:10.036214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.128 [2024-07-20 17:22:10.036726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.128 [2024-07-20 17:22:10.037009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.128 [2024-07-20 17:22:10.037036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.128 [2024-07-20 17:22:10.037053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.128 [2024-07-20 17:22:10.037245] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.128 [2024-07-20 17:22:10.037401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.128 [2024-07-20 17:22:10.037422] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.128 [2024-07-20 17:22:10.037435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.128 [2024-07-20 17:22:10.039731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.128 [2024-07-20 17:22:10.048495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.128 [2024-07-20 17:22:10.048965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.128 [2024-07-20 17:22:10.049178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.128 [2024-07-20 17:22:10.049204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.128 [2024-07-20 17:22:10.049220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.128 [2024-07-20 17:22:10.049455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.128 [2024-07-20 17:22:10.049559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.128 [2024-07-20 17:22:10.049596] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.128 [2024-07-20 17:22:10.049611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.128 [2024-07-20 17:22:10.051884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.128 [2024-07-20 17:22:10.061177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.128 [2024-07-20 17:22:10.061657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.128 [2024-07-20 17:22:10.061989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.128 [2024-07-20 17:22:10.062016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.128 [2024-07-20 17:22:10.062033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.128 [2024-07-20 17:22:10.062205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.128 [2024-07-20 17:22:10.062329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.128 [2024-07-20 17:22:10.062348] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.128 [2024-07-20 17:22:10.062361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.064571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.073615] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.073984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.074271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.074296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.074325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.074465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.074588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.074607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.074619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.076896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.086099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.086514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.086753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.086779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.086802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.086970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.087168] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.087194] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.087209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.089596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.098671] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.099103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.099368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.099394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.099410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.099589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.099738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.099759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.099773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.101950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.111323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.111745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.112000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.112027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.112043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.112205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.112378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.112399] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.112412] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.114875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.124074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.124522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.124808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.124835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.124851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.124984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.125205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.125224] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.125241] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.127509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.136531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.137132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.137457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.137485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.137502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.137705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.137887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.137909] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.137924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.140343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.149105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.149628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.149961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.149990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.150007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.150195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.150362] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.150381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.150394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.152678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.161828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.162374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.162655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.162681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.162697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.162883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.163031] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.163051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.163065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.165437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.174289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.174805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.175043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.175070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.175100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.175209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.175375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.175395] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.175407] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.177639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.186809] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.187284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.187560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.187586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.129 [2024-07-20 17:22:10.187602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.129 [2024-07-20 17:22:10.187788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.129 [2024-07-20 17:22:10.187946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.129 [2024-07-20 17:22:10.187967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.129 [2024-07-20 17:22:10.187981] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.129 [2024-07-20 17:22:10.190260] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.129 [2024-07-20 17:22:10.199599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.129 [2024-07-20 17:22:10.200044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.129 [2024-07-20 17:22:10.200289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.130 [2024-07-20 17:22:10.200314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.130 [2024-07-20 17:22:10.200330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.130 [2024-07-20 17:22:10.200486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.130 [2024-07-20 17:22:10.200638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.130 [2024-07-20 17:22:10.200658] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.130 [2024-07-20 17:22:10.200671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.130 [2024-07-20 17:22:10.202981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.130 [2024-07-20 17:22:10.212132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.130 [2024-07-20 17:22:10.212529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.130 [2024-07-20 17:22:10.212819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.130 [2024-07-20 17:22:10.212857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.130 [2024-07-20 17:22:10.212873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.130 [2024-07-20 17:22:10.213004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.130 [2024-07-20 17:22:10.213161] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.130 [2024-07-20 17:22:10.213180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.130 [2024-07-20 17:22:10.213193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.130 [2024-07-20 17:22:10.215409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.130 [2024-07-20 17:22:10.224488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.130 [2024-07-20 17:22:10.224924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.130 [2024-07-20 17:22:10.225180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.130 [2024-07-20 17:22:10.225219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.130 [2024-07-20 17:22:10.225235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.130 [2024-07-20 17:22:10.225390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.130 [2024-07-20 17:22:10.225571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.130 [2024-07-20 17:22:10.225590] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.130 [2024-07-20 17:22:10.225603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.130 [2024-07-20 17:22:10.227957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.130 [2024-07-20 17:22:10.237190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.130 [2024-07-20 17:22:10.237658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.130 [2024-07-20 17:22:10.237924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.130 [2024-07-20 17:22:10.237950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.130 [2024-07-20 17:22:10.237967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.130 [2024-07-20 17:22:10.238156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.130 [2024-07-20 17:22:10.238308] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.130 [2024-07-20 17:22:10.238327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.130 [2024-07-20 17:22:10.238340] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.130 [2024-07-20 17:22:10.240813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.130 [2024-07-20 17:22:10.249786] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.130 [2024-07-20 17:22:10.250465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.130 [2024-07-20 17:22:10.250823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.130 [2024-07-20 17:22:10.250852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.130 [2024-07-20 17:22:10.250869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.130 [2024-07-20 17:22:10.251091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.130 [2024-07-20 17:22:10.251274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.130 [2024-07-20 17:22:10.251294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.130 [2024-07-20 17:22:10.251306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.130 [2024-07-20 17:22:10.253760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.130 [2024-07-20 17:22:10.262323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.130 [2024-07-20 17:22:10.262799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.130 [2024-07-20 17:22:10.263164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.130 [2024-07-20 17:22:10.263202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.130 [2024-07-20 17:22:10.263235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.130 [2024-07-20 17:22:10.263451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.130 [2024-07-20 17:22:10.263601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.130 [2024-07-20 17:22:10.263621] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.130 [2024-07-20 17:22:10.263634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.130 [2024-07-20 17:22:10.266022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.130 [2024-07-20 17:22:10.274906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.130 [2024-07-20 17:22:10.275423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.130 [2024-07-20 17:22:10.275716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.130 [2024-07-20 17:22:10.275743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.130 [2024-07-20 17:22:10.275759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.130 [2024-07-20 17:22:10.275957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.130 [2024-07-20 17:22:10.276088] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.130 [2024-07-20 17:22:10.276108] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.130 [2024-07-20 17:22:10.276122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.130 [2024-07-20 17:22:10.278490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.390 [2024-07-20 17:22:10.287588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.390 [2024-07-20 17:22:10.288029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.288316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.288347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.390 [2024-07-20 17:22:10.288377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.390 [2024-07-20 17:22:10.288562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.390 [2024-07-20 17:22:10.288683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.390 [2024-07-20 17:22:10.288702] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.390 [2024-07-20 17:22:10.288715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.390 [2024-07-20 17:22:10.291245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.390 [2024-07-20 17:22:10.300101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.390 [2024-07-20 17:22:10.300674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.301068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.301114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.390 [2024-07-20 17:22:10.301134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.390 [2024-07-20 17:22:10.301332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.390 [2024-07-20 17:22:10.301502] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.390 [2024-07-20 17:22:10.301526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.390 [2024-07-20 17:22:10.301543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.390 [2024-07-20 17:22:10.303955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.390 [2024-07-20 17:22:10.312701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.390 [2024-07-20 17:22:10.313151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.313445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.313470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.390 [2024-07-20 17:22:10.313485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.390 [2024-07-20 17:22:10.313642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.390 [2024-07-20 17:22:10.313765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.390 [2024-07-20 17:22:10.313810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.390 [2024-07-20 17:22:10.313825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.390 [2024-07-20 17:22:10.316066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.390 [2024-07-20 17:22:10.325285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.390 [2024-07-20 17:22:10.325839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.326115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.326141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.390 [2024-07-20 17:22:10.326166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.390 [2024-07-20 17:22:10.326338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.390 [2024-07-20 17:22:10.326475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.390 [2024-07-20 17:22:10.326495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.390 [2024-07-20 17:22:10.326507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.390 [2024-07-20 17:22:10.328892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.390 [2024-07-20 17:22:10.337824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.390 [2024-07-20 17:22:10.338494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.338818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.338846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.390 [2024-07-20 17:22:10.338863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.390 [2024-07-20 17:22:10.339018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.390 [2024-07-20 17:22:10.339221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.390 [2024-07-20 17:22:10.339241] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.390 [2024-07-20 17:22:10.339254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.390 [2024-07-20 17:22:10.341508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.390 [2024-07-20 17:22:10.350480] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.390 [2024-07-20 17:22:10.350935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.351225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.351251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.390 [2024-07-20 17:22:10.351281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.390 [2024-07-20 17:22:10.351436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.390 [2024-07-20 17:22:10.351588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.390 [2024-07-20 17:22:10.351607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.390 [2024-07-20 17:22:10.351620] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.390 [2024-07-20 17:22:10.353771] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.390 [2024-07-20 17:22:10.363036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.390 [2024-07-20 17:22:10.363625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.363922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.363951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.390 [2024-07-20 17:22:10.363968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.390 [2024-07-20 17:22:10.364113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.390 [2024-07-20 17:22:10.364298] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.390 [2024-07-20 17:22:10.364317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.390 [2024-07-20 17:22:10.364330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.390 [2024-07-20 17:22:10.366603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.390 [2024-07-20 17:22:10.375485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.390 [2024-07-20 17:22:10.376024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.376400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.376425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.390 [2024-07-20 17:22:10.376440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.390 [2024-07-20 17:22:10.376591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.390 [2024-07-20 17:22:10.376788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.390 [2024-07-20 17:22:10.376818] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.390 [2024-07-20 17:22:10.376832] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.390 [2024-07-20 17:22:10.378996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.390 [2024-07-20 17:22:10.388062] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.390 [2024-07-20 17:22:10.388533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.388809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.390 [2024-07-20 17:22:10.388836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.390 [2024-07-20 17:22:10.388852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.390 [2024-07-20 17:22:10.388999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.390 [2024-07-20 17:22:10.389142] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.389162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.389175] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.391442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.391 [2024-07-20 17:22:10.400756] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.401307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.401610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.401637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.401654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.401863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.401999] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.402020] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.402035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.404168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.391 [2024-07-20 17:22:10.413314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.413990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.414321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.414349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.414366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.414512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.414665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.414685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.414698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.416953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.391 [2024-07-20 17:22:10.425917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.426349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.426591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.426633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.426649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.426828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.426958] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.426978] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.426992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.429287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
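The repeated posix_sock_create failure is an ordinary TCP connect() being refused. A standalone probe against the address and port taken from the log reproduces the symptom whenever no NVMe-oF target is listening; the program below is an illustrative sketch, not SPDK's posix_sock_create:

/* tcp_probe.c - one-shot connect probe; illustrative only.
 * Build: cc -o tcp_probe tcp_probe.c    Run: ./tcp_probe 10.0.0.2 4420 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *addr = argc > 1 ? argv[1] : "10.0.0.2";  /* from the log */
    int port = argc > 2 ? atoi(argv[2]) : 4420;          /* from the log */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", addr);
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener this prints errno 111, matching the log. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        return 1;
    }
    printf("connected to %s:%d\n", addr, port);
    close(fd);
    return 0;
}

If a listener is present on port 4420, the same probe connects, and the reset path in the log would proceed past nvme_tcp_qpair_connect_sock instead of failing into nvme_ctrlr_fail.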
00:29:54.391 [2024-07-20 17:22:10.438448] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.438910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.439209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.439233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.439248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.439444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.439594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.439613] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.439631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.441899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.391 [2024-07-20 17:22:10.450999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.451518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.451850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.451876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.451893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.452067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.452235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.452255] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.452267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.454448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.391 [2024-07-20 17:22:10.463689] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.464168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.464468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.464494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.464510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.464694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.464887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.464909] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.464923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.467244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.391 [2024-07-20 17:22:10.476195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.476638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.476926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.476953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.476969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.477145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.477302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.477322] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.477335] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.479704] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.391 [2024-07-20 17:22:10.488764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.489396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.489720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.489746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.489762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.489958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.490116] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.490135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.490148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.492399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.391 [2024-07-20 17:22:10.501057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.501491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.501747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.501787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.501814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.501977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.502106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.502141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.502155] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.504331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.391 [2024-07-20 17:22:10.513437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.513975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.514274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.391 [2024-07-20 17:22:10.514299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.391 [2024-07-20 17:22:10.514314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.391 [2024-07-20 17:22:10.514453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.391 [2024-07-20 17:22:10.514605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.391 [2024-07-20 17:22:10.514624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.391 [2024-07-20 17:22:10.514637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.391 [2024-07-20 17:22:10.516929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.391 [2024-07-20 17:22:10.526186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.391 [2024-07-20 17:22:10.526605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.392 [2024-07-20 17:22:10.526922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.392 [2024-07-20 17:22:10.526948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.392 [2024-07-20 17:22:10.526964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.392 [2024-07-20 17:22:10.527145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.392 [2024-07-20 17:22:10.527296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.392 [2024-07-20 17:22:10.527315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.392 [2024-07-20 17:22:10.527328] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.392 [2024-07-20 17:22:10.529686] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.392 [2024-07-20 17:22:10.538733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.392 [2024-07-20 17:22:10.539182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.392 [2024-07-20 17:22:10.539533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.392 [2024-07-20 17:22:10.539557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.392 [2024-07-20 17:22:10.539572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.392 [2024-07-20 17:22:10.539724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.392 [2024-07-20 17:22:10.539889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.392 [2024-07-20 17:22:10.539910] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.392 [2024-07-20 17:22:10.539924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.392 [2024-07-20 17:22:10.542370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.651 [2024-07-20 17:22:10.551157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.651 [2024-07-20 17:22:10.551736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.552024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.552053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.651 [2024-07-20 17:22:10.552069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.651 [2024-07-20 17:22:10.552272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.651 [2024-07-20 17:22:10.552459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.651 [2024-07-20 17:22:10.552481] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.651 [2024-07-20 17:22:10.552497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.651 [2024-07-20 17:22:10.554618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.651 [2024-07-20 17:22:10.563711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.651 [2024-07-20 17:22:10.564197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.564434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.564474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.651 [2024-07-20 17:22:10.564490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.651 [2024-07-20 17:22:10.564673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.651 [2024-07-20 17:22:10.564847] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.651 [2024-07-20 17:22:10.564882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.651 [2024-07-20 17:22:10.564895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.651 [2024-07-20 17:22:10.567233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.651 [2024-07-20 17:22:10.576541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.651 [2024-07-20 17:22:10.577031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.577278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.577303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.651 [2024-07-20 17:22:10.577319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.651 [2024-07-20 17:22:10.577518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.651 [2024-07-20 17:22:10.577669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.651 [2024-07-20 17:22:10.577688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.651 [2024-07-20 17:22:10.577701] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.651 [2024-07-20 17:22:10.579998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.651 [2024-07-20 17:22:10.589126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.651 [2024-07-20 17:22:10.589536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.589785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.589834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.651 [2024-07-20 17:22:10.589850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.651 [2024-07-20 17:22:10.589995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.651 [2024-07-20 17:22:10.590126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.651 [2024-07-20 17:22:10.590147] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.651 [2024-07-20 17:22:10.590160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.651 [2024-07-20 17:22:10.592405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.651 [2024-07-20 17:22:10.601734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.651 [2024-07-20 17:22:10.602217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.602501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.602527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.651 [2024-07-20 17:22:10.602542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.651 [2024-07-20 17:22:10.602696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.651 [2024-07-20 17:22:10.602893] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.651 [2024-07-20 17:22:10.602914] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.651 [2024-07-20 17:22:10.602928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.651 [2024-07-20 17:22:10.605247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.651 [2024-07-20 17:22:10.614368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.651 [2024-07-20 17:22:10.614859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.615129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.615154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.651 [2024-07-20 17:22:10.615170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.651 [2024-07-20 17:22:10.615323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.651 [2024-07-20 17:22:10.615461] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.651 [2024-07-20 17:22:10.615481] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.651 [2024-07-20 17:22:10.615494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.651 [2024-07-20 17:22:10.617658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.651 [2024-07-20 17:22:10.626849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.651 [2024-07-20 17:22:10.627359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.627629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.627656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.651 [2024-07-20 17:22:10.627673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.651 [2024-07-20 17:22:10.627850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.651 [2024-07-20 17:22:10.627998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.651 [2024-07-20 17:22:10.628019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.651 [2024-07-20 17:22:10.628033] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.651 [2024-07-20 17:22:10.630384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.651 [2024-07-20 17:22:10.639340] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.651 [2024-07-20 17:22:10.639852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.651 [2024-07-20 17:22:10.640135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.640161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.652 [2024-07-20 17:22:10.640182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.652 [2024-07-20 17:22:10.640351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.652 [2024-07-20 17:22:10.640501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.652 [2024-07-20 17:22:10.640520] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.652 [2024-07-20 17:22:10.640533] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.652 [2024-07-20 17:22:10.642853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.652 [2024-07-20 17:22:10.651823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.652 [2024-07-20 17:22:10.652245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.652493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.652534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.652 [2024-07-20 17:22:10.652549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.652 [2024-07-20 17:22:10.652747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.652 [2024-07-20 17:22:10.652936] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.652 [2024-07-20 17:22:10.652956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.652 [2024-07-20 17:22:10.652970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.652 [2024-07-20 17:22:10.655213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.652 [2024-07-20 17:22:10.664529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.652 [2024-07-20 17:22:10.664925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.665176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.665201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.652 [2024-07-20 17:22:10.665217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.652 [2024-07-20 17:22:10.665356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.652 [2024-07-20 17:22:10.665509] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.652 [2024-07-20 17:22:10.665529] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.652 [2024-07-20 17:22:10.665542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.652 [2024-07-20 17:22:10.667862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.652 [2024-07-20 17:22:10.677096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.652 [2024-07-20 17:22:10.677605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.677912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.677936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.652 [2024-07-20 17:22:10.677951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.652 [2024-07-20 17:22:10.678094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.652 [2024-07-20 17:22:10.678258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.652 [2024-07-20 17:22:10.678277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.652 [2024-07-20 17:22:10.678290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.652 [2024-07-20 17:22:10.680635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.652 [2024-07-20 17:22:10.689542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.652 [2024-07-20 17:22:10.690152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.690481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.690508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.652 [2024-07-20 17:22:10.690525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.652 [2024-07-20 17:22:10.690686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.652 [2024-07-20 17:22:10.690878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.652 [2024-07-20 17:22:10.690900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.652 [2024-07-20 17:22:10.690914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.652 [2024-07-20 17:22:10.693173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.652 [2024-07-20 17:22:10.702149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.652 [2024-07-20 17:22:10.702579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.702900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.702927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.652 [2024-07-20 17:22:10.702944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.652 [2024-07-20 17:22:10.703120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.652 [2024-07-20 17:22:10.703275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.652 [2024-07-20 17:22:10.703295] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.652 [2024-07-20 17:22:10.703308] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.652 [2024-07-20 17:22:10.705501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.652 [2024-07-20 17:22:10.714913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.652 [2024-07-20 17:22:10.715532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.715884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.715925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.652 [2024-07-20 17:22:10.715942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.652 [2024-07-20 17:22:10.716099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.652 [2024-07-20 17:22:10.716257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.652 [2024-07-20 17:22:10.716277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.652 [2024-07-20 17:22:10.716290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.652 [2024-07-20 17:22:10.718453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.652 [2024-07-20 17:22:10.727548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.652 [2024-07-20 17:22:10.727996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.728217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.652 [2024-07-20 17:22:10.728242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:54.652 [2024-07-20 17:22:10.728258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:54.652 [2024-07-20 17:22:10.728430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:54.652 [2024-07-20 17:22:10.728583] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.652 [2024-07-20 17:22:10.728603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.652 [2024-07-20 17:22:10.728616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.652 [2024-07-20 17:22:10.731029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.652 [2024-07-20 17:22:10.740140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.652 [2024-07-20 17:22:10.740600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.652 [2024-07-20 17:22:10.740882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.652 [2024-07-20 17:22:10.740909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.652 [2024-07-20 17:22:10.740925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.652 [2024-07-20 17:22:10.741083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.652 [2024-07-20 17:22:10.741179] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.652 [2024-07-20 17:22:10.741213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.652 [2024-07-20 17:22:10.741226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.652 [2024-07-20 17:22:10.743549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.652 [2024-07-20 17:22:10.752726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.652 [2024-07-20 17:22:10.753412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.652 [2024-07-20 17:22:10.753693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.653 [2024-07-20 17:22:10.753720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.653 [2024-07-20 17:22:10.753736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.653 [2024-07-20 17:22:10.753990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.653 [2024-07-20 17:22:10.754169] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.653 [2024-07-20 17:22:10.754193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.653 [2024-07-20 17:22:10.754207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.653 [2024-07-20 17:22:10.756314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.653 [2024-07-20 17:22:10.765262] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.653 [2024-07-20 17:22:10.765729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.653 [2024-07-20 17:22:10.766008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.653 [2024-07-20 17:22:10.766035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.653 [2024-07-20 17:22:10.766051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.653 [2024-07-20 17:22:10.766210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.653 [2024-07-20 17:22:10.766376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.653 [2024-07-20 17:22:10.766396] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.653 [2024-07-20 17:22:10.766408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.653 [2024-07-20 17:22:10.768738] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.653 [2024-07-20 17:22:10.777747] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.653 [2024-07-20 17:22:10.778302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.653 [2024-07-20 17:22:10.778680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.653 [2024-07-20 17:22:10.778708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.653 [2024-07-20 17:22:10.778725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.653 [2024-07-20 17:22:10.778913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.653 [2024-07-20 17:22:10.779086] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.653 [2024-07-20 17:22:10.779106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.653 [2024-07-20 17:22:10.779119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.653 [2024-07-20 17:22:10.781405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.653 [2024-07-20 17:22:10.790548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.653 [2024-07-20 17:22:10.791051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.653 [2024-07-20 17:22:10.791334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.653 [2024-07-20 17:22:10.791360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.653 [2024-07-20 17:22:10.791391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.653 [2024-07-20 17:22:10.791552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.653 [2024-07-20 17:22:10.791704] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.653 [2024-07-20 17:22:10.791723] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.653 [2024-07-20 17:22:10.791741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.653 [2024-07-20 17:22:10.794238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.653 [2024-07-20 17:22:10.803254] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.653 [2024-07-20 17:22:10.803668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.653 [2024-07-20 17:22:10.803960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.653 [2024-07-20 17:22:10.803988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.653 [2024-07-20 17:22:10.804005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.653 [2024-07-20 17:22:10.804199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.653 [2024-07-20 17:22:10.804304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.653 [2024-07-20 17:22:10.804326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.653 [2024-07-20 17:22:10.804341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.653 [2024-07-20 17:22:10.806503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.912 [2024-07-20 17:22:10.815609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.912 [2024-07-20 17:22:10.816054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.912 [2024-07-20 17:22:10.816344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.912 [2024-07-20 17:22:10.816369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.912 [2024-07-20 17:22:10.816399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.912 [2024-07-20 17:22:10.816491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.912 [2024-07-20 17:22:10.816661] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.912 [2024-07-20 17:22:10.816685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.912 [2024-07-20 17:22:10.816701] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.912 [2024-07-20 17:22:10.819021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.912 [2024-07-20 17:22:10.828260] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.912 [2024-07-20 17:22:10.828901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.912 [2024-07-20 17:22:10.829174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.912 [2024-07-20 17:22:10.829199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.913 [2024-07-20 17:22:10.829215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.913 [2024-07-20 17:22:10.829385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.913 [2024-07-20 17:22:10.829523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.913 [2024-07-20 17:22:10.829542] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.913 [2024-07-20 17:22:10.829555] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.913 [2024-07-20 17:22:10.831844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.913 [2024-07-20 17:22:10.840784] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.913 [2024-07-20 17:22:10.841519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.841838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.841867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.913 [2024-07-20 17:22:10.841884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.913 [2024-07-20 17:22:10.842019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.913 [2024-07-20 17:22:10.842193] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.913 [2024-07-20 17:22:10.842213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.913 [2024-07-20 17:22:10.842226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.913 [2024-07-20 17:22:10.844477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.913 [2024-07-20 17:22:10.853332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.913 [2024-07-20 17:22:10.853756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.854054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.854081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.913 [2024-07-20 17:22:10.854096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.913 [2024-07-20 17:22:10.854251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.913 [2024-07-20 17:22:10.854389] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.913 [2024-07-20 17:22:10.854409] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.913 [2024-07-20 17:22:10.854421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.913 [2024-07-20 17:22:10.856877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.913 [2024-07-20 17:22:10.865783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.913 [2024-07-20 17:22:10.866294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.866551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.866576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.913 [2024-07-20 17:22:10.866592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.913 [2024-07-20 17:22:10.866758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.913 [2024-07-20 17:22:10.866957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.913 [2024-07-20 17:22:10.866979] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.913 [2024-07-20 17:22:10.866993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.913 [2024-07-20 17:22:10.869269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.913 [2024-07-20 17:22:10.878407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.913 [2024-07-20 17:22:10.878921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.879186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.879212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.913 [2024-07-20 17:22:10.879228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.913 [2024-07-20 17:22:10.879416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.913 [2024-07-20 17:22:10.879597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.913 [2024-07-20 17:22:10.879617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.913 [2024-07-20 17:22:10.879629] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.913 [2024-07-20 17:22:10.882133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.913 [2024-07-20 17:22:10.891032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.913 [2024-07-20 17:22:10.891548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.891803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.891830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.913 [2024-07-20 17:22:10.891846] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.913 [2024-07-20 17:22:10.892038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.913 [2024-07-20 17:22:10.892236] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.913 [2024-07-20 17:22:10.892255] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.913 [2024-07-20 17:22:10.892268] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.913 [2024-07-20 17:22:10.894464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.913 [2024-07-20 17:22:10.903438] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.913 [2024-07-20 17:22:10.903908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.904210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.904250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.913 [2024-07-20 17:22:10.904266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.913 [2024-07-20 17:22:10.904359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.913 [2024-07-20 17:22:10.904542] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.913 [2024-07-20 17:22:10.904562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.913 [2024-07-20 17:22:10.904575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.913 [2024-07-20 17:22:10.906972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.913 [2024-07-20 17:22:10.915970] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.913 [2024-07-20 17:22:10.916425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.916704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.916730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.913 [2024-07-20 17:22:10.916759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.913 [2024-07-20 17:22:10.916942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.913 [2024-07-20 17:22:10.917087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.913 [2024-07-20 17:22:10.917122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.913 [2024-07-20 17:22:10.917136] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.913 [2024-07-20 17:22:10.919474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.913 [2024-07-20 17:22:10.928501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.913 [2024-07-20 17:22:10.928975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.929256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.913 [2024-07-20 17:22:10.929281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.913 [2024-07-20 17:22:10.929296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.914 [2024-07-20 17:22:10.929481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.914 [2024-07-20 17:22:10.929634] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.914 [2024-07-20 17:22:10.929653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.914 [2024-07-20 17:22:10.929666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.914 [2024-07-20 17:22:10.931981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.914 [2024-07-20 17:22:10.940956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.914 [2024-07-20 17:22:10.941374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.941770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.941815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.914 [2024-07-20 17:22:10.941830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.914 [2024-07-20 17:22:10.941954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.914 [2024-07-20 17:22:10.942123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.914 [2024-07-20 17:22:10.942142] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.914 [2024-07-20 17:22:10.942155] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.914 [2024-07-20 17:22:10.944478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.914 [2024-07-20 17:22:10.953355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.914 [2024-07-20 17:22:10.953882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.954122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.954155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.914 [2024-07-20 17:22:10.954172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.914 [2024-07-20 17:22:10.954326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.914 [2024-07-20 17:22:10.954515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.914 [2024-07-20 17:22:10.954539] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.914 [2024-07-20 17:22:10.954555] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.914 [2024-07-20 17:22:10.956637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.914 [2024-07-20 17:22:10.965758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.914 [2024-07-20 17:22:10.966224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.966488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.966517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.914 [2024-07-20 17:22:10.966535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.914 [2024-07-20 17:22:10.966701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.914 [2024-07-20 17:22:10.966842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.914 [2024-07-20 17:22:10.966873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.914 [2024-07-20 17:22:10.966886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.914 [2024-07-20 17:22:10.969294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.914 [2024-07-20 17:22:10.978446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.914 [2024-07-20 17:22:10.978857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.979123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.979151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.914 [2024-07-20 17:22:10.979169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.914 [2024-07-20 17:22:10.979334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.914 [2024-07-20 17:22:10.979522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.914 [2024-07-20 17:22:10.979546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.914 [2024-07-20 17:22:10.979562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.914 [2024-07-20 17:22:10.981913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.914 [2024-07-20 17:22:10.990919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.914 [2024-07-20 17:22:10.991625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.992012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:10.992042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.914 [2024-07-20 17:22:10.992065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.914 [2024-07-20 17:22:10.992214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.914 [2024-07-20 17:22:10.992420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.914 [2024-07-20 17:22:10.992444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.914 [2024-07-20 17:22:10.992460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.914 [2024-07-20 17:22:10.994641] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.914 [2024-07-20 17:22:11.003493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.914 [2024-07-20 17:22:11.003935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:11.004192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:11.004220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.914 [2024-07-20 17:22:11.004238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.914 [2024-07-20 17:22:11.004386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.914 [2024-07-20 17:22:11.004575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.914 [2024-07-20 17:22:11.004599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.914 [2024-07-20 17:22:11.004614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.914 [2024-07-20 17:22:11.007017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.914 [2024-07-20 17:22:11.016128] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.914 [2024-07-20 17:22:11.016613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:11.016913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:11.016942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.914 [2024-07-20 17:22:11.016960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.914 [2024-07-20 17:22:11.017127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.914 [2024-07-20 17:22:11.017297] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.914 [2024-07-20 17:22:11.017321] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.914 [2024-07-20 17:22:11.017337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.914 [2024-07-20 17:22:11.019640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.914 [2024-07-20 17:22:11.028669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.914 [2024-07-20 17:22:11.029104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:11.029459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.914 [2024-07-20 17:22:11.029483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.914 [2024-07-20 17:22:11.029498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.915 [2024-07-20 17:22:11.029670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.915 [2024-07-20 17:22:11.029841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.915 [2024-07-20 17:22:11.029866] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.915 [2024-07-20 17:22:11.029882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.915 [2024-07-20 17:22:11.032221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.915 [2024-07-20 17:22:11.041425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.915 [2024-07-20 17:22:11.041892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.915 [2024-07-20 17:22:11.042177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.915 [2024-07-20 17:22:11.042200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.915 [2024-07-20 17:22:11.042216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.915 [2024-07-20 17:22:11.042399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.915 [2024-07-20 17:22:11.042605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.915 [2024-07-20 17:22:11.042630] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.915 [2024-07-20 17:22:11.042645] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.915 [2024-07-20 17:22:11.044864] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.915 [2024-07-20 17:22:11.054155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.915 [2024-07-20 17:22:11.054598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.915 [2024-07-20 17:22:11.054912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.915 [2024-07-20 17:22:11.054943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.915 [2024-07-20 17:22:11.054960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.915 [2024-07-20 17:22:11.055127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.915 [2024-07-20 17:22:11.055279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.915 [2024-07-20 17:22:11.055301] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.915 [2024-07-20 17:22:11.055315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.915 [2024-07-20 17:22:11.057459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.915 [2024-07-20 17:22:11.066631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.915 [2024-07-20 17:22:11.067146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.915 [2024-07-20 17:22:11.067534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.915 [2024-07-20 17:22:11.067563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:54.915 [2024-07-20 17:22:11.067581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:54.915 [2024-07-20 17:22:11.067747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:54.915 [2024-07-20 17:22:11.067952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.915 [2024-07-20 17:22:11.067978] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.915 [2024-07-20 17:22:11.067994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.188 [2024-07-20 17:22:11.070187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.188 [2024-07-20 17:22:11.079417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.188 [2024-07-20 17:22:11.079852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.188 [2024-07-20 17:22:11.080122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.188 [2024-07-20 17:22:11.080150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.188 [2024-07-20 17:22:11.080168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.188 [2024-07-20 17:22:11.080351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.188 [2024-07-20 17:22:11.080521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.188 [2024-07-20 17:22:11.080545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.188 [2024-07-20 17:22:11.080561] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.188 [2024-07-20 17:22:11.082946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.188 [2024-07-20 17:22:11.091923] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.188 [2024-07-20 17:22:11.092394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.188 [2024-07-20 17:22:11.092842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.188 [2024-07-20 17:22:11.092872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.188 [2024-07-20 17:22:11.092889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.188 [2024-07-20 17:22:11.093055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.188 [2024-07-20 17:22:11.093189] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.188 [2024-07-20 17:22:11.093213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.188 [2024-07-20 17:22:11.093229] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.188 [2024-07-20 17:22:11.095712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.188 [2024-07-20 17:22:11.104311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.188 [2024-07-20 17:22:11.104841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.188 [2024-07-20 17:22:11.105128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.188 [2024-07-20 17:22:11.105157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.188 [2024-07-20 17:22:11.105174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.188 [2024-07-20 17:22:11.105323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.188 [2024-07-20 17:22:11.105493] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.188 [2024-07-20 17:22:11.105522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.188 [2024-07-20 17:22:11.105539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.188 [2024-07-20 17:22:11.107852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.188 [2024-07-20 17:22:11.116955] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.188 [2024-07-20 17:22:11.117540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.188 [2024-07-20 17:22:11.117885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.188 [2024-07-20 17:22:11.117918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.188 [2024-07-20 17:22:11.117936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.188 [2024-07-20 17:22:11.118128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.188 [2024-07-20 17:22:11.118281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.188 [2024-07-20 17:22:11.118305] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.188 [2024-07-20 17:22:11.118322] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.188 [2024-07-20 17:22:11.120608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.188 [2024-07-20 17:22:11.129496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.188 [2024-07-20 17:22:11.129977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.188 [2024-07-20 17:22:11.130264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.130293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.189 [2024-07-20 17:22:11.130312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.189 [2024-07-20 17:22:11.130442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.189 [2024-07-20 17:22:11.130613] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.189 [2024-07-20 17:22:11.130638] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.189 [2024-07-20 17:22:11.130653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.189 [2024-07-20 17:22:11.132586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.189 [2024-07-20 17:22:11.142238] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.189 [2024-07-20 17:22:11.142861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.143112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.143155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.189 [2024-07-20 17:22:11.143173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.189 [2024-07-20 17:22:11.143321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.189 [2024-07-20 17:22:11.143473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.189 [2024-07-20 17:22:11.143497] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.189 [2024-07-20 17:22:11.143519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.189 [2024-07-20 17:22:11.145906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.189 [2024-07-20 17:22:11.154746] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.189 [2024-07-20 17:22:11.155205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.155442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.155467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.189 [2024-07-20 17:22:11.155483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.189 [2024-07-20 17:22:11.155595] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.189 [2024-07-20 17:22:11.155776] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.189 [2024-07-20 17:22:11.155812] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.189 [2024-07-20 17:22:11.155829] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.189 [2024-07-20 17:22:11.158062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.189 [2024-07-20 17:22:11.167237] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.189 [2024-07-20 17:22:11.167716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.168038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.168065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.189 [2024-07-20 17:22:11.168081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.189 [2024-07-20 17:22:11.168266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.189 [2024-07-20 17:22:11.168445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.189 [2024-07-20 17:22:11.168470] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.189 [2024-07-20 17:22:11.168486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.189 [2024-07-20 17:22:11.170894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.189 [2024-07-20 17:22:11.179864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.189 [2024-07-20 17:22:11.180363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.180710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.180784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.189 [2024-07-20 17:22:11.180812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.189 [2024-07-20 17:22:11.180998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.189 [2024-07-20 17:22:11.181132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.189 [2024-07-20 17:22:11.181157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.189 [2024-07-20 17:22:11.181173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.189 [2024-07-20 17:22:11.183458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.189 [2024-07-20 17:22:11.192557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.189 [2024-07-20 17:22:11.193028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.193531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.193581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.189 [2024-07-20 17:22:11.193599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.189 [2024-07-20 17:22:11.193765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.189 [2024-07-20 17:22:11.193909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.189 [2024-07-20 17:22:11.193935] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.189 [2024-07-20 17:22:11.193951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.189 [2024-07-20 17:22:11.196320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.189 [2024-07-20 17:22:11.205281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.189 [2024-07-20 17:22:11.205818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.206053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.206082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.189 [2024-07-20 17:22:11.206100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.189 [2024-07-20 17:22:11.206265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.189 [2024-07-20 17:22:11.206436] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.189 [2024-07-20 17:22:11.206460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.189 [2024-07-20 17:22:11.206476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.189 [2024-07-20 17:22:11.209045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.189 [2024-07-20 17:22:11.217953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.189 [2024-07-20 17:22:11.218430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.218891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.218922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.189 [2024-07-20 17:22:11.218941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.189 [2024-07-20 17:22:11.219071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.189 [2024-07-20 17:22:11.219205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.189 [2024-07-20 17:22:11.219230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.189 [2024-07-20 17:22:11.219245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.189 [2024-07-20 17:22:11.221405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.189 [2024-07-20 17:22:11.230624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.189 [2024-07-20 17:22:11.231281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.231787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.189 [2024-07-20 17:22:11.231861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:55.189 [2024-07-20 17:22:11.231877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:55.189 [2024-07-20 17:22:11.232073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:55.189 [2024-07-20 17:22:11.232302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.189 [2024-07-20 17:22:11.232328] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.189 [2024-07-20 17:22:11.232344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.189 [2024-07-20 17:22:11.234471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.189 [2024-07-20 17:22:11.243204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.189 [2024-07-20 17:22:11.243856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.189 [2024-07-20 17:22:11.244103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.189 [2024-07-20 17:22:11.244129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.189 [2024-07-20 17:22:11.244145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.189 [2024-07-20 17:22:11.244308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.189 [2024-07-20 17:22:11.244500] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.189 [2024-07-20 17:22:11.244525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.189 [2024-07-20 17:22:11.244541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.189 [2024-07-20 17:22:11.246935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.189 [2024-07-20 17:22:11.255665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.189 [2024-07-20 17:22:11.256097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.189 [2024-07-20 17:22:11.256394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.189 [2024-07-20 17:22:11.256424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.190 [2024-07-20 17:22:11.256442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.190 [2024-07-20 17:22:11.256645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.190 [2024-07-20 17:22:11.256808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.190 [2024-07-20 17:22:11.256833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.190 [2024-07-20 17:22:11.256849] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.190 [2024-07-20 17:22:11.259277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.190 [2024-07-20 17:22:11.268075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.190 [2024-07-20 17:22:11.268506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.268918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.268948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.190 [2024-07-20 17:22:11.268967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.190 [2024-07-20 17:22:11.269134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.190 [2024-07-20 17:22:11.269286] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.190 [2024-07-20 17:22:11.269310] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.190 [2024-07-20 17:22:11.269326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.190 [2024-07-20 17:22:11.271664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.190 [2024-07-20 17:22:11.280602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.190 [2024-07-20 17:22:11.281094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.281395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.281442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.190 [2024-07-20 17:22:11.281461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.190 [2024-07-20 17:22:11.281645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.190 [2024-07-20 17:22:11.281807] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.190 [2024-07-20 17:22:11.281832] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.190 [2024-07-20 17:22:11.281849] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.190 [2024-07-20 17:22:11.284292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.190 [2024-07-20 17:22:11.293099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.190 [2024-07-20 17:22:11.293522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.293913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.293942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.190 [2024-07-20 17:22:11.293960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.190 [2024-07-20 17:22:11.294126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.190 [2024-07-20 17:22:11.294297] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.190 [2024-07-20 17:22:11.294322] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.190 [2024-07-20 17:22:11.294338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.190 [2024-07-20 17:22:11.296495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.190 [2024-07-20 17:22:11.305596] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.190 [2024-07-20 17:22:11.306089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.306374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.306421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.190 [2024-07-20 17:22:11.306445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.190 [2024-07-20 17:22:11.306594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.190 [2024-07-20 17:22:11.306740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.190 [2024-07-20 17:22:11.306762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.190 [2024-07-20 17:22:11.306777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.190 [2024-07-20 17:22:11.309129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.190 [2024-07-20 17:22:11.318407] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.190 [2024-07-20 17:22:11.318859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.319124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.319154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.190 [2024-07-20 17:22:11.319172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.190 [2024-07-20 17:22:11.319338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.190 [2024-07-20 17:22:11.319508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.190 [2024-07-20 17:22:11.319532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.190 [2024-07-20 17:22:11.319548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.190 [2024-07-20 17:22:11.321802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.190 [2024-07-20 17:22:11.330939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.190 [2024-07-20 17:22:11.331353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.331597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.190 [2024-07-20 17:22:11.331626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.190 [2024-07-20 17:22:11.331644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.449 [2024-07-20 17:22:11.331822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.449 [2024-07-20 17:22:11.332010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.449 [2024-07-20 17:22:11.332034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.449 [2024-07-20 17:22:11.332050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.449 [2024-07-20 17:22:11.334317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.449 [2024-07-20 17:22:11.343578] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.449 [2024-07-20 17:22:11.344009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.449 [2024-07-20 17:22:11.344251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.449 [2024-07-20 17:22:11.344280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.449 [2024-07-20 17:22:11.344298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.449 [2024-07-20 17:22:11.344506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.449 [2024-07-20 17:22:11.344712] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.449 [2024-07-20 17:22:11.344737] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.449 [2024-07-20 17:22:11.344753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.347226] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.450 [2024-07-20 17:22:11.356170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.356633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.356885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.356911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.356926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.357119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.357272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.357296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.357312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.359632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.450 [2024-07-20 17:22:11.368698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.369173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.369625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.369673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.369691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.369870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.370022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.370047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.370063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.372327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.450 [2024-07-20 17:22:11.381420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.381959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.382444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.382481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.382495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.382632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.382818] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.382843] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.382860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.385234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.450 [2024-07-20 17:22:11.393993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.394450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.394923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.394952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.394970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.395172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.395342] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.395366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.395383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.397630] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.450 [2024-07-20 17:22:11.406604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.407047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.407554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.407603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.407621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.407834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.407986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.408010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.408026] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.410330] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.450 [2024-07-20 17:22:11.419307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.419727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.420212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.420277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.420297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.420504] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.420658] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.420689] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.420706] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.422839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.450 [2024-07-20 17:22:11.431891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.432388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.432610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.432640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.432658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.432788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.432953] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.432977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.432993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.435223] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.450 [2024-07-20 17:22:11.444443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.444944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.445247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.445272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.445289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.445475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.445673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.445697] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.445713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.448028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.450 [2024-07-20 17:22:11.456869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.457318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.457847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.457876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.457894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.458077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.458248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.458272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.458295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.450 [2024-07-20 17:22:11.460651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.450 [2024-07-20 17:22:11.469466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.450 [2024-07-20 17:22:11.469838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.470135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.450 [2024-07-20 17:22:11.470164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.450 [2024-07-20 17:22:11.470182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.450 [2024-07-20 17:22:11.470366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.450 [2024-07-20 17:22:11.470554] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.450 [2024-07-20 17:22:11.470579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.450 [2024-07-20 17:22:11.470595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.473050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.451 [2024-07-20 17:22:11.481942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.482400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.482682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.482713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.482731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.482873] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.483044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.483069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.483085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.485405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.451 [2024-07-20 17:22:11.494367] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.494821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.495108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.495133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.495148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.495315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.495467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.495491] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.495507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.498136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.451 [2024-07-20 17:22:11.507040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.507472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.507926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.507955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.507972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.508157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.508309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.508333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.508350] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.510632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.451 [2024-07-20 17:22:11.519861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.520331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.520827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.520889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.520906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.521054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.521242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.521267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.521283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.523743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.451 [2024-07-20 17:22:11.532212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.532644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.532974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.533003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.533021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.533205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.533393] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.533417] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.533433] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.535682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.451 [2024-07-20 17:22:11.544736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.545137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.545418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.545446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.545464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.545612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.545829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.545854] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.545870] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.548208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.451 [2024-07-20 17:22:11.557418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.557890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.558171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.558200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.558217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.558347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.558535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.558573] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.558588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.560785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.451 [2024-07-20 17:22:11.570123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.570517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.570922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.570952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.570970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.571189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.571378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.571402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.571418] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.573754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.451 [2024-07-20 17:22:11.582743] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.583176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.583654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.583714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.583732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.583926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.584151] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.584176] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.584191] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.586472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.451 [2024-07-20 17:22:11.595449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.451 [2024-07-20 17:22:11.595963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.596254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.451 [2024-07-20 17:22:11.596283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.451 [2024-07-20 17:22:11.596301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.451 [2024-07-20 17:22:11.596467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.451 [2024-07-20 17:22:11.596637] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.451 [2024-07-20 17:22:11.596662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.451 [2024-07-20 17:22:11.596678] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.451 [2024-07-20 17:22:11.599059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.710 [2024-07-20 17:22:11.608377] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.710 [2024-07-20 17:22:11.608932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-20 17:22:11.609200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-20 17:22:11.609229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.710 [2024-07-20 17:22:11.609247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.710 [2024-07-20 17:22:11.609412] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.710 [2024-07-20 17:22:11.609546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.710 [2024-07-20 17:22:11.609570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.710 [2024-07-20 17:22:11.609586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.710 [2024-07-20 17:22:11.611969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.710 [2024-07-20 17:22:11.620984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.710 [2024-07-20 17:22:11.621374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-20 17:22:11.621685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-20 17:22:11.621739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.710 [2024-07-20 17:22:11.621757] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.710 [2024-07-20 17:22:11.621933] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.710 [2024-07-20 17:22:11.622050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.710 [2024-07-20 17:22:11.622074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.710 [2024-07-20 17:22:11.622090] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.710 [2024-07-20 17:22:11.624443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.710 [2024-07-20 17:22:11.633588] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.710 [2024-07-20 17:22:11.634004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-20 17:22:11.634265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-20 17:22:11.634293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.710 [2024-07-20 17:22:11.634311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.710 [2024-07-20 17:22:11.634513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.710 [2024-07-20 17:22:11.634665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.710 [2024-07-20 17:22:11.634689] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.710 [2024-07-20 17:22:11.634705] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.710 [2024-07-20 17:22:11.637105] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.710 [2024-07-20 17:22:11.646095] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.710 [2024-07-20 17:22:11.646522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-20 17:22:11.646917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.710 [2024-07-20 17:22:11.646947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.710 [2024-07-20 17:22:11.646965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.710 [2024-07-20 17:22:11.647149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.710 [2024-07-20 17:22:11.647319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.710 [2024-07-20 17:22:11.647344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.710 [2024-07-20 17:22:11.647360] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.710 [2024-07-20 17:22:11.649748] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.711 [2024-07-20 17:22:11.658560] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.659008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.659274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.659303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.659326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.659546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.659716] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.659740] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.659756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.661939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.711 [2024-07-20 17:22:11.671083] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.671514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.671816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.671846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.671864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.672012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.672183] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.672207] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.672223] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.674523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.711 [2024-07-20 17:22:11.683662] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.684107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.684418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.684465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.684483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.684702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.684921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.684945] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.684961] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.687297] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.711 [2024-07-20 17:22:11.696345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.696788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.697089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.697118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.697136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.697325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.697512] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.697537] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.697552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.699865] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.711 [2024-07-20 17:22:11.708802] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.709254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.709566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.709606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.709622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.709836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.710020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.710044] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.710059] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.712378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.711 [2024-07-20 17:22:11.721292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.721738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.722006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.722037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.722055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.722258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.722429] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.722453] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.722469] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.724789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.711 [2024-07-20 17:22:11.733822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.734452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.734923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.734953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.734970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.735136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.735312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.735337] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.735352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.737525] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.711 [2024-07-20 17:22:11.746502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.746985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.747467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.747518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.747536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.747721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.747920] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.747945] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.747961] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.750225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.711 [2024-07-20 17:22:11.759052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.759440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.759892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.759921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.759939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.760160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.760312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.760337] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.760352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.762670] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.711 [2024-07-20 17:22:11.771830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.711 [2024-07-20 17:22:11.772254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.772775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.711 [2024-07-20 17:22:11.772850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.711 [2024-07-20 17:22:11.772869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.711 [2024-07-20 17:22:11.772981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.711 [2024-07-20 17:22:11.773132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.711 [2024-07-20 17:22:11.773162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.711 [2024-07-20 17:22:11.773178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.711 [2024-07-20 17:22:11.775464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.712 [2024-07-20 17:22:11.784293] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.712 [2024-07-20 17:22:11.784920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.785215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.785244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.712 [2024-07-20 17:22:11.785262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.712 [2024-07-20 17:22:11.785429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.712 [2024-07-20 17:22:11.785581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.712 [2024-07-20 17:22:11.785606] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.712 [2024-07-20 17:22:11.785622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.712 [2024-07-20 17:22:11.787875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.712 [2024-07-20 17:22:11.796814] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.712 [2024-07-20 17:22:11.797218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.797499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.797551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.712 [2024-07-20 17:22:11.797569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.712 [2024-07-20 17:22:11.797735] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.712 [2024-07-20 17:22:11.797915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.712 [2024-07-20 17:22:11.797940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.712 [2024-07-20 17:22:11.797956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.712 [2024-07-20 17:22:11.800460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.712 [2024-07-20 17:22:11.809487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.712 [2024-07-20 17:22:11.809932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.810216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.810244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.712 [2024-07-20 17:22:11.810262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.712 [2024-07-20 17:22:11.810477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.712 [2024-07-20 17:22:11.810647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.712 [2024-07-20 17:22:11.810669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.712 [2024-07-20 17:22:11.810689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.712 [2024-07-20 17:22:11.813104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.712 [2024-07-20 17:22:11.822021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.712 [2024-07-20 17:22:11.822743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.823073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.823105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.712 [2024-07-20 17:22:11.823123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.712 [2024-07-20 17:22:11.823272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.712 [2024-07-20 17:22:11.823424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.712 [2024-07-20 17:22:11.823447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.712 [2024-07-20 17:22:11.823463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.712 [2024-07-20 17:22:11.825848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.712 [2024-07-20 17:22:11.834649] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.712 [2024-07-20 17:22:11.835045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.835350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.835379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.712 [2024-07-20 17:22:11.835397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.712 [2024-07-20 17:22:11.835509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.712 [2024-07-20 17:22:11.835679] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.712 [2024-07-20 17:22:11.835704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.712 [2024-07-20 17:22:11.835720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.712 [2024-07-20 17:22:11.837952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.712 [2024-07-20 17:22:11.847261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.712 [2024-07-20 17:22:11.847906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.848154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.848184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.712 [2024-07-20 17:22:11.848202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.712 [2024-07-20 17:22:11.848333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.712 [2024-07-20 17:22:11.848521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.712 [2024-07-20 17:22:11.848545] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.712 [2024-07-20 17:22:11.848562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.712 [2024-07-20 17:22:11.850753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.712 [2024-07-20 17:22:11.859921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.712 [2024-07-20 17:22:11.860564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.860845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.712 [2024-07-20 17:22:11.860875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.712 [2024-07-20 17:22:11.860893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.712 [2024-07-20 17:22:11.861077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.712 [2024-07-20 17:22:11.861270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.712 [2024-07-20 17:22:11.861294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.712 [2024-07-20 17:22:11.861311] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.712 [2024-07-20 17:22:11.863711] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.971 [2024-07-20 17:22:11.872670] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.971 [2024-07-20 17:22:11.873087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.971 [2024-07-20 17:22:11.873322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.971 [2024-07-20 17:22:11.873351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.971 [2024-07-20 17:22:11.873369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.971 [2024-07-20 17:22:11.873553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.971 [2024-07-20 17:22:11.873722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.971 [2024-07-20 17:22:11.873746] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.971 [2024-07-20 17:22:11.873762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.971 [2024-07-20 17:22:11.876088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.971 [2024-07-20 17:22:11.885317] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.971 [2024-07-20 17:22:11.885789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.971 [2024-07-20 17:22:11.886069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.971 [2024-07-20 17:22:11.886098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.971 [2024-07-20 17:22:11.886116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.971 [2024-07-20 17:22:11.886335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.971 [2024-07-20 17:22:11.886505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:11.886529] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:11.886545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:11.888800] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.972 [2024-07-20 17:22:11.897991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:11.898432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.898809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.898839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:11.898856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:11.899023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:11.899157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:11.899181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:11.899197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:11.901425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.972 [2024-07-20 17:22:11.910840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:11.911402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.911664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.911692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:11.911710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:11.911868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:11.912039] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:11.912063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:11.912079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:11.914467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.972 [2024-07-20 17:22:11.923514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:11.923971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.924268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.924293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:11.924323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:11.924497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:11.924667] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:11.924691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:11.924707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:11.927340] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.972 [2024-07-20 17:22:11.935960] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:11.936428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.936874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.936904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:11.936921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:11.937069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:11.937275] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:11.937300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:11.937315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:11.939670] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.972 [2024-07-20 17:22:11.948487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:11.948948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.949171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.949199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:11.949217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:11.949419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:11.949571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:11.949595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:11.949611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:11.952138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.972 [2024-07-20 17:22:11.961029] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:11.961478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.961717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.961746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:11.961763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:11.961956] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:11.962127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:11.962151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:11.962167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:11.964425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.972 [2024-07-20 17:22:11.973528] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:11.973990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.974239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.974264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:11.974285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:11.974457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:11.974573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:11.974597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:11.974613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:11.977068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.972 [2024-07-20 17:22:11.986318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:11.986904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.987191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.987219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:11.987237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:11.987403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:11.987555] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:11.987579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:11.987595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:11.989787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.972 [2024-07-20 17:22:11.999053] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:11.999613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.999921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:11.999959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:11.999977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:12.000143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:12.000314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.972 [2024-07-20 17:22:12.000338] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.972 [2024-07-20 17:22:12.000354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.972 [2024-07-20 17:22:12.002719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.972 [2024-07-20 17:22:12.011773] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.972 [2024-07-20 17:22:12.012208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:12.012528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.972 [2024-07-20 17:22:12.012557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.972 [2024-07-20 17:22:12.012575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.972 [2024-07-20 17:22:12.012728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.972 [2024-07-20 17:22:12.012911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.012936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.012952] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.015360] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.973 [2024-07-20 17:22:12.024408] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.973 [2024-07-20 17:22:12.024828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.025093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.025118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.973 [2024-07-20 17:22:12.025133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.973 [2024-07-20 17:22:12.025302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.973 [2024-07-20 17:22:12.025514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.025538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.025554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.027954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.973 [2024-07-20 17:22:12.037023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.973 [2024-07-20 17:22:12.037425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.037910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.037939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.973 [2024-07-20 17:22:12.037957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.973 [2024-07-20 17:22:12.038106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.973 [2024-07-20 17:22:12.038294] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.038318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.038334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.040527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.973 [2024-07-20 17:22:12.049570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.973 [2024-07-20 17:22:12.050047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.050387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.050433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.973 [2024-07-20 17:22:12.050451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.973 [2024-07-20 17:22:12.050671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.973 [2024-07-20 17:22:12.050876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.050902] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.050918] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.053093] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.973 [2024-07-20 17:22:12.062282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.973 [2024-07-20 17:22:12.062689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.062932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.062963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.973 [2024-07-20 17:22:12.062981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.973 [2024-07-20 17:22:12.063183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.973 [2024-07-20 17:22:12.063354] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.063376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.063391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.065820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.973 [2024-07-20 17:22:12.074754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.973 [2024-07-20 17:22:12.075242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.075729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.075777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.973 [2024-07-20 17:22:12.075804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.973 [2024-07-20 17:22:12.076009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.973 [2024-07-20 17:22:12.076215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.076239] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.076255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.078447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.973 [2024-07-20 17:22:12.087279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.973 [2024-07-20 17:22:12.087711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.087972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.087998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.973 [2024-07-20 17:22:12.088015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.973 [2024-07-20 17:22:12.088220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.973 [2024-07-20 17:22:12.088390] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.088423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.088440] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.090836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
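Within each cycle, the later records name the async reconnect machinery the bdev layer is driving: nvme_ctrlr_disconnect, spdk_nvme_ctrlr_reconnect_poll_async, nvme_ctrlr_fail, and the terminal _bdev_nvme_reset_ctrlr_complete. A hedged sketch of that call pattern from a standalone initiator, assuming the public spdk/nvme.h API of this SPDK generation; the -EAGAIN "still in progress" convention in the comments is our reading of the headers, not something this log demonstrates directly:

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Sketch only: drive one controller reset to completion by polling.
     * The bdev_nvme layer does the same from a poller, which is where the
     * "controller reinitialization failed" / "Resetting controller failed."
     * records above originate when every reconnect attempt is refused. */
    static int
    reset_ctrlr_blocking(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr);
        if (rc != 0) {
            return rc;          /* e.g. another reset already in flight */
        }

        spdk_nvme_ctrlr_reconnect_async(ctrlr);

        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN); /* assumed: -EAGAIN means still reconnecting */

        return rc;              /* 0 on success, negative errno on failure */
    }

In this run the poll can never succeed, so each cycle ends with the controller moved to the failed state and the reset completing with an error, exactly the per-cycle tail seen in the records.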
00:29:55.973 [2024-07-20 17:22:12.099846] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.973 [2024-07-20 17:22:12.100291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.100625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.100654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.973 [2024-07-20 17:22:12.100672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.973 [2024-07-20 17:22:12.100811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.973 [2024-07-20 17:22:12.100963] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.100987] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.101003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.103411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.973 [2024-07-20 17:22:12.112525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.973 [2024-07-20 17:22:12.112994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.113389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.113444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.973 [2024-07-20 17:22:12.113465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.973 [2024-07-20 17:22:12.113655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.973 [2024-07-20 17:22:12.113838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.113863] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.113879] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.116227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.973 [2024-07-20 17:22:12.125241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.973 [2024-07-20 17:22:12.125679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.125891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.973 [2024-07-20 17:22:12.125918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:55.973 [2024-07-20 17:22:12.125935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:55.973 [2024-07-20 17:22:12.126110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:55.973 [2024-07-20 17:22:12.126270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.973 [2024-07-20 17:22:12.126294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.973 [2024-07-20 17:22:12.126316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.973 [2024-07-20 17:22:12.128354] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.234 [2024-07-20 17:22:12.137843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.234 [2024-07-20 17:22:12.138278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.138504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.138532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.234 [2024-07-20 17:22:12.138550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.234 [2024-07-20 17:22:12.138716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.234 [2024-07-20 17:22:12.138915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.234 [2024-07-20 17:22:12.138940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.234 [2024-07-20 17:22:12.138956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.234 [2024-07-20 17:22:12.140951] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.234 [2024-07-20 17:22:12.150171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.234 [2024-07-20 17:22:12.150652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.150900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.150929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.234 [2024-07-20 17:22:12.150947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.234 [2024-07-20 17:22:12.151150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.234 [2024-07-20 17:22:12.151284] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.234 [2024-07-20 17:22:12.151308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.234 [2024-07-20 17:22:12.151324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.234 [2024-07-20 17:22:12.153646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.234 [2024-07-20 17:22:12.162803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.234 [2024-07-20 17:22:12.163226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.163761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.163820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.234 [2024-07-20 17:22:12.163839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.234 [2024-07-20 17:22:12.164041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.234 [2024-07-20 17:22:12.164211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.234 [2024-07-20 17:22:12.164235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.234 [2024-07-20 17:22:12.164251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.234 [2024-07-20 17:22:12.166559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.234 [2024-07-20 17:22:12.175436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.234 [2024-07-20 17:22:12.175814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.176076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.176105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.234 [2024-07-20 17:22:12.176123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.234 [2024-07-20 17:22:12.176271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.234 [2024-07-20 17:22:12.176405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.234 [2024-07-20 17:22:12.176429] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.234 [2024-07-20 17:22:12.176445] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.234 [2024-07-20 17:22:12.178660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.234 [2024-07-20 17:22:12.187959] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.234 [2024-07-20 17:22:12.188403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.188725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.188776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.234 [2024-07-20 17:22:12.188806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.234 [2024-07-20 17:22:12.188993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.234 [2024-07-20 17:22:12.189199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.234 [2024-07-20 17:22:12.189223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.234 [2024-07-20 17:22:12.189240] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.234 [2024-07-20 17:22:12.191454] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.234 [2024-07-20 17:22:12.200752] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.234 [2024-07-20 17:22:12.201186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.201644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.201695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.234 [2024-07-20 17:22:12.201712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.234 [2024-07-20 17:22:12.201874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.234 [2024-07-20 17:22:12.202044] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.234 [2024-07-20 17:22:12.202068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.234 [2024-07-20 17:22:12.202085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.234 [2024-07-20 17:22:12.204369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.234 [2024-07-20 17:22:12.213282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.234 [2024-07-20 17:22:12.213803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.214096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.214125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.234 [2024-07-20 17:22:12.214143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.234 [2024-07-20 17:22:12.214290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.234 [2024-07-20 17:22:12.214460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.234 [2024-07-20 17:22:12.214484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.234 [2024-07-20 17:22:12.214500] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.234 [2024-07-20 17:22:12.216699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.234 [2024-07-20 17:22:12.226070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.234 [2024-07-20 17:22:12.226698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.226974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.234 [2024-07-20 17:22:12.227003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.234 [2024-07-20 17:22:12.227021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.234 [2024-07-20 17:22:12.227186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.234 [2024-07-20 17:22:12.227357] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.235 [2024-07-20 17:22:12.227381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.235 [2024-07-20 17:22:12.227397] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.235 [2024-07-20 17:22:12.229573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.235 [2024-07-20 17:22:12.238571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.235 [2024-07-20 17:22:12.239022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.239523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.239574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.235 [2024-07-20 17:22:12.239591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.235 [2024-07-20 17:22:12.239804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.235 [2024-07-20 17:22:12.240011] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.235 [2024-07-20 17:22:12.240035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.235 [2024-07-20 17:22:12.240051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.235 [2024-07-20 17:22:12.242388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.235 [2024-07-20 17:22:12.251277] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.235 [2024-07-20 17:22:12.251842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.252143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.252183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.235 [2024-07-20 17:22:12.252198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.235 [2024-07-20 17:22:12.252379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.235 [2024-07-20 17:22:12.252577] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.235 [2024-07-20 17:22:12.252602] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.235 [2024-07-20 17:22:12.252618] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.235 [2024-07-20 17:22:12.255074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.235 [2024-07-20 17:22:12.264126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.235 [2024-07-20 17:22:12.264560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.264855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.264884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.235 [2024-07-20 17:22:12.264902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.235 [2024-07-20 17:22:12.265086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.235 [2024-07-20 17:22:12.265292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.235 [2024-07-20 17:22:12.265317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.235 [2024-07-20 17:22:12.265333] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.235 [2024-07-20 17:22:12.267507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.235 [2024-07-20 17:22:12.276529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.235 [2024-07-20 17:22:12.276962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.235 [2024-07-20 17:22:12.277195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.235 [2024-07-20 17:22:12.277223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.235 [2024-07-20 17:22:12.277242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.235 [2024-07-20 17:22:12.277426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.235 [2024-07-20 17:22:12.277633] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.235 [2024-07-20 17:22:12.277658] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.235 [2024-07-20 17:22:12.277674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.235 [2024-07-20 17:22:12.280108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 664644 Killed "${NVMF_APP[@]}" "$@"
00:29:56.235 17:22:12 -- host/bdevperf.sh@36 -- # tgt_init
00:29:56.235 17:22:12 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:56.235 17:22:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:29:56.235 17:22:12 -- common/autotest_common.sh@712 -- # xtrace_disable
00:29:56.235 17:22:12 -- common/autotest_common.sh@10 -- # set +x
00:29:56.235 [2024-07-20 17:22:12.289292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.235 [2024-07-20 17:22:12.289682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.235 [2024-07-20 17:22:12.289985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.235 [2024-07-20 17:22:12.290033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.235 [2024-07-20 17:22:12.290051] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.235 [2024-07-20 17:22:12.290199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.235 [2024-07-20 17:22:12.290387] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.235 [2024-07-20 17:22:12.290411] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.235 [2024-07-20 17:22:12.290427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.235 17:22:12 -- nvmf/common.sh@469 -- # nvmfpid=665663
00:29:56.235 17:22:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:56.235 [2024-07-20 17:22:12.292675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.235 17:22:12 -- nvmf/common.sh@470 -- # waitforlisten 665663
00:29:56.235 17:22:12 -- common/autotest_common.sh@819 -- # '[' -z 665663 ']'
00:29:56.235 17:22:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:56.235 17:22:12 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:56.235 17:22:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:56.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 17:22:12 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:56.235 17:22:12 -- common/autotest_common.sh@10 -- # set +x
00:29:56.235 [2024-07-20 17:22:12.301655] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.235 [2024-07-20 17:22:12.302113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.235 [2024-07-20 17:22:12.302425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.235 [2024-07-20 17:22:12.302472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.235 [2024-07-20 17:22:12.302490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.235 [2024-07-20 17:22:12.302658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.235 [2024-07-20 17:22:12.302820] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.235 [2024-07-20 17:22:12.302845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.235 [2024-07-20 17:22:12.302862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.235 [2024-07-20 17:22:12.305207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
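The refused connects here are expected: bdevperf.sh (line 35) reports that the previous target process 664644 was killed, and tgt_init/nvmfappstart relaunches nvmf_tgt (new pid 665663) inside the cvl_0_0_ns_spdk network namespace, so every reset attempt fails until the new target is listening again. waitforlisten then polls /var/tmp/spdk.sock with max_retries=100. A rough sketch of that kind of readiness poll, assuming a plain UNIX-domain socket (illustrative only, not the autotest helper's actual implementation):

/* Illustrative readiness poll: retry until the freshly started target
 * accepts a connection on its RPC socket, giving up after max_retries.
 * NOT the real waitforlisten; just the same idea in C. */
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { 0 };
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;           /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);     /* brief back-off before the next attempt */
    }
    return -1;                  /* process never started listening */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}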
00:29:56.235 [2024-07-20 17:22:12.314038] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.235 [2024-07-20 17:22:12.314403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.314675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.314701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.235 [2024-07-20 17:22:12.314717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.235 [2024-07-20 17:22:12.314902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.235 [2024-07-20 17:22:12.315055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.235 [2024-07-20 17:22:12.315078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.235 [2024-07-20 17:22:12.315092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.235 [2024-07-20 17:22:12.317389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.235 [2024-07-20 17:22:12.326429] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.235 [2024-07-20 17:22:12.326881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.327126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.327167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.235 [2024-07-20 17:22:12.327184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.235 [2024-07-20 17:22:12.327357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.235 [2024-07-20 17:22:12.327484] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.235 [2024-07-20 17:22:12.327505] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.235 [2024-07-20 17:22:12.327519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.235 [2024-07-20 17:22:12.329486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.235 [2024-07-20 17:22:12.335440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:56.235 [2024-07-20 17:22:12.335508] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.235 [2024-07-20 17:22:12.338641] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.235 [2024-07-20 17:22:12.339089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.339337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.235 [2024-07-20 17:22:12.339377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.236 [2024-07-20 17:22:12.339393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.236 [2024-07-20 17:22:12.339593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.236 [2024-07-20 17:22:12.339748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.236 [2024-07-20 17:22:12.339782] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.236 [2024-07-20 17:22:12.339805] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.236 [2024-07-20 17:22:12.341882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.236 [2024-07-20 17:22:12.350917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.236 [2024-07-20 17:22:12.351309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.236 [2024-07-20 17:22:12.351521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.236 [2024-07-20 17:22:12.351551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.236 [2024-07-20 17:22:12.351568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.236 [2024-07-20 17:22:12.351726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.236 [2024-07-20 17:22:12.351862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.236 [2024-07-20 17:22:12.351884] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.236 [2024-07-20 17:22:12.351898] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.236 [2024-07-20 17:22:12.353731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.236 [2024-07-20 17:22:12.362984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.236 [2024-07-20 17:22:12.363412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.236 [2024-07-20 17:22:12.363743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.236 [2024-07-20 17:22:12.363786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.236 [2024-07-20 17:22:12.363810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.236 [2024-07-20 17:22:12.364023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.236 [2024-07-20 17:22:12.364185] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.236 [2024-07-20 17:22:12.364205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.236 [2024-07-20 17:22:12.364218] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.236 [2024-07-20 17:22:12.366290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.236 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.236 [2024-07-20 17:22:12.375333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.236 [2024-07-20 17:22:12.375754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.236 [2024-07-20 17:22:12.375987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.236 [2024-07-20 17:22:12.376013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.236 [2024-07-20 17:22:12.376029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.236 [2024-07-20 17:22:12.376195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.236 [2024-07-20 17:22:12.376372] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.236 [2024-07-20 17:22:12.376393] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.236 [2024-07-20 17:22:12.376407] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.236 [2024-07-20 17:22:12.378502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.236 [2024-07-20 17:22:12.387879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.236 [2024-07-20 17:22:12.388283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.236 [2024-07-20 17:22:12.388534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.236 [2024-07-20 17:22:12.388563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.236 [2024-07-20 17:22:12.388587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.236 [2024-07-20 17:22:12.388736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.236 [2024-07-20 17:22:12.388908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.236 [2024-07-20 17:22:12.388931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.236 [2024-07-20 17:22:12.388946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.495 [2024-07-20 17:22:12.391296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.495 [2024-07-20 17:22:12.400434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.495 [2024-07-20 17:22:12.400881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.495 [2024-07-20 17:22:12.401142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.401171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.496 [2024-07-20 17:22:12.401188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.496 [2024-07-20 17:22:12.401355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.496 [2024-07-20 17:22:12.401542] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.496 [2024-07-20 17:22:12.401567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.496 [2024-07-20 17:22:12.401582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.496 [2024-07-20 17:22:12.403741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.496 [2024-07-20 17:22:12.406616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:56.496 [2024-07-20 17:22:12.412844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.496 [2024-07-20 17:22:12.413385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.413775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.413811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.496 [2024-07-20 17:22:12.413832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.496 [2024-07-20 17:22:12.414001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.496 [2024-07-20 17:22:12.414174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.496 [2024-07-20 17:22:12.414199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.496 [2024-07-20 17:22:12.414217] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.496 [2024-07-20 17:22:12.416485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.496 [2024-07-20 17:22:12.425420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.496 [2024-07-20 17:22:12.426035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.426383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.426413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.496 [2024-07-20 17:22:12.426444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.496 [2024-07-20 17:22:12.426622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.496 [2024-07-20 17:22:12.426837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.496 [2024-07-20 17:22:12.426859] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.496 [2024-07-20 17:22:12.426875] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.496 [2024-07-20 17:22:12.429124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.496 [2024-07-20 17:22:12.437789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.496 [2024-07-20 17:22:12.438271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.438567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.438596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.496 [2024-07-20 17:22:12.438614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.496 [2024-07-20 17:22:12.438829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.496 [2024-07-20 17:22:12.438996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.496 [2024-07-20 17:22:12.439017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.496 [2024-07-20 17:22:12.439031] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.496 [2024-07-20 17:22:12.441364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.496 [2024-07-20 17:22:12.450159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.496 [2024-07-20 17:22:12.450593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.450888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.450915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.496 [2024-07-20 17:22:12.450931] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.496 [2024-07-20 17:22:12.451062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.496 [2024-07-20 17:22:12.451224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.496 [2024-07-20 17:22:12.451249] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.496 [2024-07-20 17:22:12.451266] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.496 [2024-07-20 17:22:12.453474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.496 [2024-07-20 17:22:12.462757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.496 [2024-07-20 17:22:12.463418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.463709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.463736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.496 [2024-07-20 17:22:12.463772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.496 [2024-07-20 17:22:12.463978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.496 [2024-07-20 17:22:12.464100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.496 [2024-07-20 17:22:12.464125] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.496 [2024-07-20 17:22:12.464146] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.496 [2024-07-20 17:22:12.466406] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.496 [2024-07-20 17:22:12.475361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.496 [2024-07-20 17:22:12.475868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.476217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.496 [2024-07-20 17:22:12.476249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.496 [2024-07-20 17:22:12.476268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.496 [2024-07-20 17:22:12.476400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.496 [2024-07-20 17:22:12.476589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.496 [2024-07-20 17:22:12.476614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.496 [2024-07-20 17:22:12.476630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.496 [2024-07-20 17:22:12.478944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.496 [2024-07-20 17:22:12.487912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.496 [2024-07-20 17:22:12.488379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.496 [2024-07-20 17:22:12.488710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.496 [2024-07-20 17:22:12.488740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.496 [2024-07-20 17:22:12.488758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.496 [2024-07-20 17:22:12.488940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.496 [2024-07-20 17:22:12.489109] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.496 [2024-07-20 17:22:12.489134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.496 [2024-07-20 17:22:12.489150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.496 [2024-07-20 17:22:12.491475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.496 [2024-07-20 17:22:12.497202] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:56.496 [2024-07-20 17:22:12.497340] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:56.496 [2024-07-20 17:22:12.497359] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:56.496 [2024-07-20 17:22:12.497372] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
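The trace_flags.c error is unrelated to the connection failures: registering the tracepoint description RDMA_REQ_RDY_TO_COMPL_PEND (26 characters) fails because the name does not fit the fixed-size name field, which is harmless for this TCP run. The app_setup_trace notices confirm the -e 0xFFFF tracepoint mask took effect and show how to capture the trace. A sketch of the kind of bounds check that produces the error, where MAX_TPOINT_NAME is an assumed illustrative constant, not SPDK's actual limit:

/* Sketch of the bounds check behind "name (...) too long".
 * MAX_TPOINT_NAME is an ASSUMED constant for illustration only;
 * the point is that a fixed-size name field rejects long names. */
#include <stdio.h>
#include <string.h>

#define MAX_TPOINT_NAME 24 /* assumed size, including the terminating NUL */

static int register_description(const char *name)
{
    if (strlen(name) >= MAX_TPOINT_NAME) {
        fprintf(stderr, "*ERROR*: name (%s) too long\n", name);
        return -1;
    }
    /* ... a real implementation would store the description here ... */
    return 0;
}

int main(void)
{
    /* 26 characters, so it is rejected, matching the log line above. */
    return register_description("RDMA_REQ_RDY_TO_COMPL_PEND") == 0 ? 0 : 1;
}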
00:29:56.496 [2024-07-20 17:22:12.497425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:56.496 [2024-07-20 17:22:12.497452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:56.496 [2024-07-20 17:22:12.497455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:56.496 [2024-07-20 17:22:12.500145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.496 [2024-07-20 17:22:12.500642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.496 [2024-07-20 17:22:12.500897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.496 [2024-07-20 17:22:12.500926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.496 [2024-07-20 17:22:12.500944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.496 [2024-07-20 17:22:12.501112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.496 [2024-07-20 17:22:12.501310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.496 [2024-07-20 17:22:12.501331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.496 [2024-07-20 17:22:12.501346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.496 [2024-07-20 17:22:12.503536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.496 [2024-07-20 17:22:12.512490] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.496 [2024-07-20 17:22:12.513133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.496 [2024-07-20 17:22:12.513451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.496 [2024-07-20 17:22:12.513479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.496 [2024-07-20 17:22:12.513499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.496 [2024-07-20 17:22:12.513661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.497 [2024-07-20 17:22:12.513857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.497 [2024-07-20 17:22:12.513881] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.497 [2024-07-20 17:22:12.513899] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.497 [2024-07-20 17:22:12.515814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
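The three reactor notices line up with the -m 0xE core mask passed to nvmf_tgt and with spdk_app_start's earlier "Total cores available: 3": 0xE is binary 1110, so cores 1, 2 and 3 run reactors while core 0 stays free. A small worked example in plain C (not SPDK/DPDK code) that enumerates the mask:

/* Worked example: why "-m 0xE" yields 3 cores, namely 1, 2 and 3.
 * 0xE = 0b1110, so bits 1-3 are set and bit 0 (core 0) is unused. */
#include <stdio.h>

int main(void)
{
    unsigned int mask = 0xE; /* the core mask passed via -m 0xE */
    int total = 0;

    for (int core = 0; core < 32; core++) {
        if (mask & (1u << core)) {
            printf("reactor runs on core %d\n", core); /* cores 1, 2, 3 */
            total++;
        }
    }
    printf("total cores in mask: %d\n", total); /* prints 3 */
    return 0;
}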
00:29:56.497 [2024-07-20 17:22:12.524900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.525594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.525920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.525950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.525971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.526134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.497 [2024-07-20 17:22:12.526340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.497 [2024-07-20 17:22:12.526362] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.497 [2024-07-20 17:22:12.526380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.497 [2024-07-20 17:22:12.528437] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.497 [2024-07-20 17:22:12.537206] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.537888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.538218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.538248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.538270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.538389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.497 [2024-07-20 17:22:12.538514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.497 [2024-07-20 17:22:12.538536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.497 [2024-07-20 17:22:12.538555] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.497 [2024-07-20 17:22:12.540570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.497 [2024-07-20 17:22:12.549697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.550326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.550646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.550676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.550697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.550844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.497 [2024-07-20 17:22:12.550987] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.497 [2024-07-20 17:22:12.551010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.497 [2024-07-20 17:22:12.551029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.497 [2024-07-20 17:22:12.553131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.497 [2024-07-20 17:22:12.562112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.562812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.563066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.563093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.563114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.563273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.497 [2024-07-20 17:22:12.563445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.497 [2024-07-20 17:22:12.563467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.497 [2024-07-20 17:22:12.563484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.497 [2024-07-20 17:22:12.565560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.497 [2024-07-20 17:22:12.574723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.575193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.575457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.575495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.575518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.575713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.497 [2024-07-20 17:22:12.575945] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.497 [2024-07-20 17:22:12.575969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.497 [2024-07-20 17:22:12.575988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.497 [2024-07-20 17:22:12.578097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.497 [2024-07-20 17:22:12.587115] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.587498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.587738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.587766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.587783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.587980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.497 [2024-07-20 17:22:12.588117] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.497 [2024-07-20 17:22:12.588139] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.497 [2024-07-20 17:22:12.588153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.497 [2024-07-20 17:22:12.590309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.497 [2024-07-20 17:22:12.599401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.599852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.600115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.600141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.600158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.600292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.497 [2024-07-20 17:22:12.600443] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.497 [2024-07-20 17:22:12.600464] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.497 [2024-07-20 17:22:12.600478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.497 [2024-07-20 17:22:12.602455] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.497 [2024-07-20 17:22:12.611735] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.612189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.612431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.612458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.612480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.612631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.497 [2024-07-20 17:22:12.612750] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.497 [2024-07-20 17:22:12.612771] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.497 [2024-07-20 17:22:12.612812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.497 [2024-07-20 17:22:12.615023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.497 [2024-07-20 17:22:12.624095] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.624532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.624777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.624810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.624827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.624994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.497 [2024-07-20 17:22:12.625161] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.497 [2024-07-20 17:22:12.625183] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.497 [2024-07-20 17:22:12.625198] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.497 [2024-07-20 17:22:12.627199] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.497 [2024-07-20 17:22:12.636244] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.497 [2024-07-20 17:22:12.636633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.636911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.497 [2024-07-20 17:22:12.636938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.497 [2024-07-20 17:22:12.636954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.497 [2024-07-20 17:22:12.637151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.498 [2024-07-20 17:22:12.637315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.498 [2024-07-20 17:22:12.637336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.498 [2024-07-20 17:22:12.637350] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.498 [2024-07-20 17:22:12.639474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.498 [2024-07-20 17:22:12.648684] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.498 [2024-07-20 17:22:12.649089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.498 [2024-07-20 17:22:12.649346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.498 [2024-07-20 17:22:12.649372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.498 [2024-07-20 17:22:12.649388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.498 [2024-07-20 17:22:12.649542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.498 [2024-07-20 17:22:12.649663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.498 [2024-07-20 17:22:12.649685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.498 [2024-07-20 17:22:12.649699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.498 [2024-07-20 17:22:12.651780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.757 [2024-07-20 17:22:12.661052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.757 [2024-07-20 17:22:12.661450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.661682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.661707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.757 [2024-07-20 17:22:12.661723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.757 [2024-07-20 17:22:12.661880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.757 [2024-07-20 17:22:12.662034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.757 [2024-07-20 17:22:12.662056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.757 [2024-07-20 17:22:12.662085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.757 [2024-07-20 17:22:12.664189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.757 [2024-07-20 17:22:12.673274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.757 [2024-07-20 17:22:12.673684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.673905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.673932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.757 [2024-07-20 17:22:12.673949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.757 [2024-07-20 17:22:12.674131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.757 [2024-07-20 17:22:12.674281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.757 [2024-07-20 17:22:12.674302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.757 [2024-07-20 17:22:12.674316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.757 [2024-07-20 17:22:12.676303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.757 [2024-07-20 17:22:12.685530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.757 [2024-07-20 17:22:12.685938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.686181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.686207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.757 [2024-07-20 17:22:12.686222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.757 [2024-07-20 17:22:12.686388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.757 [2024-07-20 17:22:12.686559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.757 [2024-07-20 17:22:12.686580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.757 [2024-07-20 17:22:12.686594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.757 [2024-07-20 17:22:12.688645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.757 [2024-07-20 17:22:12.697851] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.757 [2024-07-20 17:22:12.698265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.698501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.698526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.757 [2024-07-20 17:22:12.698542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.757 [2024-07-20 17:22:12.698676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.757 [2024-07-20 17:22:12.698821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.757 [2024-07-20 17:22:12.698858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.757 [2024-07-20 17:22:12.698873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.757 [2024-07-20 17:22:12.701004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.757 [2024-07-20 17:22:12.710133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.757 [2024-07-20 17:22:12.710554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.710826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.757 [2024-07-20 17:22:12.710852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:56.757 [2024-07-20 17:22:12.710868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:56.757 [2024-07-20 17:22:12.711018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:56.757 [2024-07-20 17:22:12.711215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.757 [2024-07-20 17:22:12.711237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.757 [2024-07-20 17:22:12.711251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.757 [2024-07-20 17:22:12.713414] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.757 [2024-07-20 17:22:12.722383] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.757 [2024-07-20 17:22:12.722827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.757 [2024-07-20 17:22:12.723068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.757 [2024-07-20 17:22:12.723094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.757 [2024-07-20 17:22:12.723110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.757 [2024-07-20 17:22:12.723227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.757 [2024-07-20 17:22:12.723425] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.757 [2024-07-20 17:22:12.723454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.757 [2024-07-20 17:22:12.723468] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.757 [2024-07-20 17:22:12.725429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.757 [2024-07-20 17:22:12.734827] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.757 [2024-07-20 17:22:12.735261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.757 [2024-07-20 17:22:12.735508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.757 [2024-07-20 17:22:12.735534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.757 [2024-07-20 17:22:12.735550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.757 [2024-07-20 17:22:12.735684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.757 [2024-07-20 17:22:12.735892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.757 [2024-07-20 17:22:12.735914] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.757 [2024-07-20 17:22:12.735929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.757 [2024-07-20 17:22:12.738071] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.757 [2024-07-20 17:22:12.747208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.757 [2024-07-20 17:22:12.747578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.757 [2024-07-20 17:22:12.747809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.757 [2024-07-20 17:22:12.747842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.747863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.748000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.748199] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.748220] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.748234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.750222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.759548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.760014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.760258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.760284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.760300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.760449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.760630] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.760652] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.760671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.762585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.771956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.772356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.772589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.772614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.772630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.772779] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.772941] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.772963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.772978] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.775027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.784356] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.784801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.785034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.785059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.785076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.785223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.785372] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.785393] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.785406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.787608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.796533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.796891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.797130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.797157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.797174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.797340] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.797505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.797526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.797540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.799680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.808636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.809012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.809252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.809280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.809296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.809446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.809628] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.809649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.809663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.811682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.821093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.821557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.821811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.821838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.821854] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.822003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.822172] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.822194] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.822209] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.824151] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.833573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.833980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.834216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.834241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.834257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.834439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.834603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.834624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.834639] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.836639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.845874] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.846277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.846483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.846508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.846524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.846690] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.846882] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.846904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.846919] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.848922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.858216] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.858663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.858911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.858938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.758 [2024-07-20 17:22:12.858954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.758 [2024-07-20 17:22:12.859071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.758 [2024-07-20 17:22:12.859223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.758 [2024-07-20 17:22:12.859245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.758 [2024-07-20 17:22:12.859259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.758 [2024-07-20 17:22:12.861127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.758 [2024-07-20 17:22:12.870562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.758 [2024-07-20 17:22:12.870945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.758 [2024-07-20 17:22:12.871162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.759 [2024-07-20 17:22:12.871188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.759 [2024-07-20 17:22:12.871205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.759 [2024-07-20 17:22:12.871386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.759 [2024-07-20 17:22:12.871567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.759 [2024-07-20 17:22:12.871588] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.759 [2024-07-20 17:22:12.871602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.759 [2024-07-20 17:22:12.873681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.759 [2024-07-20 17:22:12.882876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.759 [2024-07-20 17:22:12.883368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.759 [2024-07-20 17:22:12.883609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.759 [2024-07-20 17:22:12.883635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.759 [2024-07-20 17:22:12.883650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.759 [2024-07-20 17:22:12.883809] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.759 [2024-07-20 17:22:12.883963] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.759 [2024-07-20 17:22:12.883985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.759 [2024-07-20 17:22:12.884000] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.759 [2024-07-20 17:22:12.885969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.759 [2024-07-20 17:22:12.895282] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.759 [2024-07-20 17:22:12.895699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.759 [2024-07-20 17:22:12.895912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.759 [2024-07-20 17:22:12.895939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.759 [2024-07-20 17:22:12.895956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.759 [2024-07-20 17:22:12.896089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.759 [2024-07-20 17:22:12.896238] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.759 [2024-07-20 17:22:12.896259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.759 [2024-07-20 17:22:12.896274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.759 [2024-07-20 17:22:12.898325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:56.759 [2024-07-20 17:22:12.907516] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:56.759 [2024-07-20 17:22:12.907920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.759 [2024-07-20 17:22:12.908155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.759 [2024-07-20 17:22:12.908181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:56.759 [2024-07-20 17:22:12.908197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:56.759 [2024-07-20 17:22:12.908348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:56.759 [2024-07-20 17:22:12.908515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:56.759 [2024-07-20 17:22:12.908536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:56.759 [2024-07-20 17:22:12.908551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:56.759 [2024-07-20 17:22:12.910483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
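
The "(9): Bad file descriptor" in each flush message is errno 9, EBADF: by the time nvme_tcp_qpair_process_completions tries to flush the qpair, the connect attempt has already failed and the socket behind the descriptor is gone, so the I/O call operates on a dead fd. A tiny hedged sketch of that failure mode in plain POSIX sockets (illustrative only, unrelated to SPDK internals):

```c
/* Illustrative only: shows why I/O on a torn-down socket reports errno 9.
 * After close(), any further send() on the descriptor fails with EBADF
 * ("Bad file descriptor"), which is the "(9)" the qpair flush logs above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                    /* socket torn down after the failed connect */

    char byte = 0;
    if (send(fd, &byte, 1, 0) < 0) {
        /* Prints: send failed, errno = 9 (Bad file descriptor) */
        printf("send failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}
```
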
00:29:57.017 [2024-07-20 17:22:12.920116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.017 [2024-07-20 17:22:12.920533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.017 [2024-07-20 17:22:12.920776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.017 [2024-07-20 17:22:12.920810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.017 [2024-07-20 17:22:12.920833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.017 [2024-07-20 17:22:12.920983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.017 [2024-07-20 17:22:12.921134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.017 [2024-07-20 17:22:12.921156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.017 [2024-07-20 17:22:12.921170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.017 [2024-07-20 17:22:12.923025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.017 [2024-07-20 17:22:12.932457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.017 [2024-07-20 17:22:12.932863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.017 [2024-07-20 17:22:12.933102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.017 [2024-07-20 17:22:12.933128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.017 [2024-07-20 17:22:12.933144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.017 [2024-07-20 17:22:12.933277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.017 [2024-07-20 17:22:12.933459] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.017 [2024-07-20 17:22:12.933480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.017 [2024-07-20 17:22:12.933494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.017 [2024-07-20 17:22:12.935512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.017 [2024-07-20 17:22:12.944596] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.017 [2024-07-20 17:22:12.945003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.017 [2024-07-20 17:22:12.945210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.017 [2024-07-20 17:22:12.945235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.017 [2024-07-20 17:22:12.945251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.017 [2024-07-20 17:22:12.945429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.017 [2024-07-20 17:22:12.945578] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.017 [2024-07-20 17:22:12.945599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.017 [2024-07-20 17:22:12.945612] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.017 [2024-07-20 17:22:12.947636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.017 [2024-07-20 17:22:12.957006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.017 [2024-07-20 17:22:12.957439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.017 [2024-07-20 17:22:12.957663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:12.957688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:12.957704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:12.957851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:12.957989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:12.958010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:12.958025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:12.960145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:12.969562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:12.970021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:12.970279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:12.970306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:12.970322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:12.970472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:12.970654] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:12.970675] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:12.970690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:12.972802] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:12.981667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:12.982063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:12.982300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:12.982326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:12.982342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:12.982475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:12.982658] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:12.982680] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:12.982694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:12.984844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:12.994112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:12.994510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:12.994750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:12.994776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:12.994800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:12.994967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:12.995158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:12.995180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:12.995194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:12.997274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.006438] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.006837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.007071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.007097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.007113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.007261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.007425] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.007447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.007460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.009520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.018803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.019221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.019434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.019462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.019478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.019610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.019789] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.019819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.019833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.021768] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.031143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.031567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.031828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.031855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.031871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.032053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.032205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.032232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.032246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.034369] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.043563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.044041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.044286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.044312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.044328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.044493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.044676] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.044697] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.044711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.046627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.055814] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.056199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.056439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.056465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.056481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.056627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.056800] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.056843] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.056858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.058905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.067961] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.068367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.068582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.068608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.068625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.068816] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.069002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.069023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.069044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.071191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.080323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.080716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.080941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.080968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.080984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.081178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.081300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.081323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.081337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.083413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.092714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.093134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.093376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.093402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.093418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.093535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.093686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.093707] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.093721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.095691] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.105150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.105578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.105825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.105852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.105868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.106066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.106201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.106222] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.106236] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.108290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.117411] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.117808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.118073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.118099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.118115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.118293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.118457] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.018 [2024-07-20 17:22:13.118478] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.018 [2024-07-20 17:22:13.118492] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.018 [2024-07-20 17:22:13.120541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.018 [2024-07-20 17:22:13.129636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.018 [2024-07-20 17:22:13.130019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.130259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.018 [2024-07-20 17:22:13.130285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.018 [2024-07-20 17:22:13.130301] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.018 [2024-07-20 17:22:13.130466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.018 [2024-07-20 17:22:13.130600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.019 [2024-07-20 17:22:13.130622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.019 [2024-07-20 17:22:13.130638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.019 [2024-07-20 17:22:13.132728] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.019 [2024-07-20 17:22:13.142113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.019 [2024-07-20 17:22:13.142559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.019 [2024-07-20 17:22:13.142827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.019 [2024-07-20 17:22:13.142854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.019 [2024-07-20 17:22:13.142871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.019 [2024-07-20 17:22:13.143004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.019 [2024-07-20 17:22:13.143188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.019 [2024-07-20 17:22:13.143209] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.019 [2024-07-20 17:22:13.143223] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.019 [2024-07-20 17:22:13.145332] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.019 [2024-07-20 17:22:13.154320] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.019 [2024-07-20 17:22:13.154760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.019 [2024-07-20 17:22:13.155010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.019 [2024-07-20 17:22:13.155036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.019 [2024-07-20 17:22:13.155052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.019 [2024-07-20 17:22:13.155202] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.019 [2024-07-20 17:22:13.155382] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.019 [2024-07-20 17:22:13.155404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.019 [2024-07-20 17:22:13.155417] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.019 [2024-07-20 17:22:13.157540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.019 [2024-07-20 17:22:13.166574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.019 [2024-07-20 17:22:13.166948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.019 [2024-07-20 17:22:13.167191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.019 [2024-07-20 17:22:13.167218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.019 [2024-07-20 17:22:13.167234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.019 [2024-07-20 17:22:13.167350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.019 [2024-07-20 17:22:13.167467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.019 [2024-07-20 17:22:13.167487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.019 [2024-07-20 17:22:13.167501] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.019 [2024-07-20 17:22:13.169488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.277 [2024-07-20 17:22:13.178882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.277 [2024-07-20 17:22:13.179271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.277 [2024-07-20 17:22:13.179515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.277 [2024-07-20 17:22:13.179541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.277 [2024-07-20 17:22:13.179557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.277 [2024-07-20 17:22:13.179706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.277 [2024-07-20 17:22:13.179887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.277 [2024-07-20 17:22:13.179910] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.277 [2024-07-20 17:22:13.179924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.277 [2024-07-20 17:22:13.182004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.277 [2024-07-20 17:22:13.191187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.277 [2024-07-20 17:22:13.191634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.277 [2024-07-20 17:22:13.191885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.277 [2024-07-20 17:22:13.191913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.277 [2024-07-20 17:22:13.191929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.277 [2024-07-20 17:22:13.192096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.278 [2024-07-20 17:22:13.192262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.278 [2024-07-20 17:22:13.192284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.278 [2024-07-20 17:22:13.192298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.278 [2024-07-20 17:22:13.194391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.278 [2024-07-20 17:22:13.203439] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.278 [2024-07-20 17:22:13.203852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.278 [2024-07-20 17:22:13.204083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.278 [2024-07-20 17:22:13.204109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420
00:29:57.278 [2024-07-20 17:22:13.204126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set
00:29:57.278 [2024-07-20 17:22:13.204243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor
00:29:57.278 [2024-07-20 17:22:13.204411] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.278 [2024-07-20 17:22:13.204432] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.278 [2024-07-20 17:22:13.204446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.278 [2024-07-20 17:22:13.206640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.278 [2024-07-20 17:22:13.215741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.216205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.216479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.216506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.216522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.216685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.216877] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.216900] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.216915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.218917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.278 [2024-07-20 17:22:13.228127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.228527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.228768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.228806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.228825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.228990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.229144] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.229181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.229195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.231341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.278 [2024-07-20 17:22:13.240342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.240758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.240975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.241002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.241018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.241198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.241315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.241337] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.241351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.243390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.278 [2024-07-20 17:22:13.252622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.253020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.253236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.253262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.253279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.253426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.253558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.253580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.253594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.255651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.278 [2024-07-20 17:22:13.264894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.265344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.265578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.265604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.265625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.265759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.265937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.265960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.265974] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.268115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.278 [2024-07-20 17:22:13.277192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.277599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.277863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.277890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.277906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.278040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.278192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.278213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.278227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.280302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.278 [2024-07-20 17:22:13.289397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.289803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.290064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.290090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.290106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.290272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.290423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.290444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.290458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.292611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.278 [2024-07-20 17:22:13.301677] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.302067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.302304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.302330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.302347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.302520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.302723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.302744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.302759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.304808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
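The block above is one iteration of bdev_nvme's reconnect loop: each reset attempt re-runs connect(), which fails with errno = 111 (ECONNREFUSED on Linux, i.e. nothing is listening on 10.0.0.2:4420 yet), so controller reinitialization fails and the reset is retried a few milliseconds later. A minimal shell check of the same condition (an illustrative sketch, not part of the test suite):

  # bash's /dev/tcp connect fails with "Connection refused" (errno 111)
  # until an NVMe/TCP listener is opened on the target address.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
      echo "port 4420 is accepting connections"
  else
      echo "connection refused or timed out, as in the retries above"
  fi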
00:29:57.278 17:22:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:57.278 17:22:13 -- common/autotest_common.sh@852 -- # return 0 00:29:57.278 17:22:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:57.278 17:22:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:57.278 17:22:13 -- common/autotest_common.sh@10 -- # set +x 00:29:57.278 [2024-07-20 17:22:13.313914] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.314326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.314581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.314607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.314624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.314790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.314936] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.314958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.314973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.317220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.278 [2024-07-20 17:22:13.326264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.326710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.326958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.326985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.327001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.327166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 17:22:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.278 17:22:13 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:57.278 [2024-07-20 17:22:13.327352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.327375] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.327390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:57.278 17:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.278 17:22:13 -- common/autotest_common.sh@10 -- # set +x 00:29:57.278 [2024-07-20 17:22:13.329429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.278 [2024-07-20 17:22:13.329451] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.278 17:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.278 17:22:13 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:57.278 17:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.278 17:22:13 -- common/autotest_common.sh@10 -- # set +x 00:29:57.278 [2024-07-20 17:22:13.338572] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.338999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.339250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.339276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.339292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.339466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.339579] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.339599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.339612] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.278 [2024-07-20 17:22:13.341816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.278 [2024-07-20 17:22:13.350921] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.351394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.351632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.351671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.278 [2024-07-20 17:22:13.351688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.278 [2024-07-20 17:22:13.351876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.278 [2024-07-20 17:22:13.352029] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.278 [2024-07-20 17:22:13.352051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.278 [2024-07-20 17:22:13.352065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:57.278 [2024-07-20 17:22:13.354014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.278 [2024-07-20 17:22:13.362961] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.278 [2024-07-20 17:22:13.363602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.363897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.278 [2024-07-20 17:22:13.363927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.279 [2024-07-20 17:22:13.363948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.279 [2024-07-20 17:22:13.364142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.279 [2024-07-20 17:22:13.364296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.279 [2024-07-20 17:22:13.364318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.279 [2024-07-20 17:22:13.364335] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.279 [2024-07-20 17:22:13.366425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.279 Malloc0 00:29:57.279 17:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.279 17:22:13 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:57.279 17:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.279 17:22:13 -- common/autotest_common.sh@10 -- # set +x 00:29:57.279 [2024-07-20 17:22:13.375450] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.279 [2024-07-20 17:22:13.376001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.279 [2024-07-20 17:22:13.376255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.279 [2024-07-20 17:22:13.376282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.279 [2024-07-20 17:22:13.376300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.279 [2024-07-20 17:22:13.376425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.279 [2024-07-20 17:22:13.376611] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.279 [2024-07-20 17:22:13.376633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.279 [2024-07-20 17:22:13.376649] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.279 [2024-07-20 17:22:13.378561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.279 17:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.279 17:22:13 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:57.279 17:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.279 17:22:13 -- common/autotest_common.sh@10 -- # set +x 00:29:57.279 [2024-07-20 17:22:13.387903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.279 [2024-07-20 17:22:13.388307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.279 [2024-07-20 17:22:13.388542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.279 [2024-07-20 17:22:13.388568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc92030 with addr=10.0.0.2, port=4420 00:29:57.279 [2024-07-20 17:22:13.388584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc92030 is same with the state(5) to be set 00:29:57.279 [2024-07-20 17:22:13.388733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc92030 (9): Bad file descriptor 00:29:57.279 [2024-07-20 17:22:13.388863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.279 [2024-07-20 17:22:13.388885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.279 [2024-07-20 17:22:13.388900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.279 17:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.279 17:22:13 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.279 17:22:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.279 17:22:13 -- common/autotest_common.sh@10 -- # set +x 00:29:57.279 [2024-07-20 17:22:13.391105] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.279 [2024-07-20 17:22:13.393666] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.279 17:22:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.279 17:22:13 -- host/bdevperf.sh@38 -- # wait 664957 00:29:57.279 [2024-07-20 17:22:13.400312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.536 [2024-07-20 17:22:13.478746] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
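Interleaved with the retry noise, host/bdevperf.sh (lines 17-21 of the trace) brings the target up over JSON-RPC; as soon as the listener opens 10.0.0.2:4420, the pending reset goes through ("Resetting controller successful"). The same sequence, consolidated (a sketch using scripts/rpc.py, which rpc_cmd wraps in this test suite):

  # Target bring-up, in the order the rpc_cmd calls appear above:
  rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB IO unit size
  rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 # expose Malloc0 as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420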
00:30:05.635
00:30:05.635 Latency(us)
00:30:05.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:05.635 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:05.635 Verification LBA range: start 0x0 length 0x4000
00:30:05.635 Nvme1n1 : 15.01 8472.68 33.10 15639.03 0.00 5293.63 995.18 25049.32
00:30:05.635 ===================================================================================================================
00:30:05.635 Total : 8472.68 33.10 15639.03 0.00 5293.63 995.18 25049.32
00:30:05.893 17:22:21 -- host/bdevperf.sh@39 -- # sync
00:30:05.893 17:22:21 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:05.893 17:22:21 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:05.893 17:22:21 -- common/autotest_common.sh@10 -- # set +x
00:30:05.893 17:22:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:05.893 17:22:22 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:05.893 17:22:22 -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:05.893 17:22:22 -- nvmf/common.sh@476 -- # nvmfcleanup
00:30:05.893 17:22:22 -- nvmf/common.sh@116 -- # sync
00:30:05.893 17:22:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:30:05.893 17:22:22 -- nvmf/common.sh@119 -- # set +e
00:30:05.893 17:22:22 -- nvmf/common.sh@120 -- # for i in {1..20}
00:30:05.893 17:22:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:30:05.893 rmmod nvme_tcp
00:30:05.893 rmmod nvme_fabrics
00:30:05.893 rmmod nvme_keyring
00:30:06.152 17:22:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:30:06.152 17:22:22 -- nvmf/common.sh@123 -- # set -e
00:30:06.152 17:22:22 -- nvmf/common.sh@124 -- # return 0
00:30:06.152 17:22:22 -- nvmf/common.sh@477 -- # '[' -n 665663 ']'
00:30:06.152 17:22:22 -- nvmf/common.sh@478 -- # killprocess 665663
00:30:06.152 17:22:22 -- common/autotest_common.sh@926 -- # '[' -z 665663 ']'
00:30:06.152 17:22:22 -- common/autotest_common.sh@930 -- # kill -0 665663
00:30:06.152 17:22:22 -- common/autotest_common.sh@931 -- # uname
00:30:06.152 17:22:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:06.152 17:22:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 665663
00:30:06.152 17:22:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:30:06.152 17:22:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:30:06.152 17:22:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 665663'
00:30:06.152 killing process with pid 665663
00:30:06.152 17:22:22 -- common/autotest_common.sh@945 -- # kill 665663
00:30:06.152 17:22:22 -- common/autotest_common.sh@950 -- # wait 665663
00:30:06.410 17:22:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:30:06.410 17:22:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:30:06.410 17:22:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:30:06.410 17:22:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:06.410 17:22:22 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:30:06.410 17:22:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:06.410 17:22:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:06.410 17:22:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:08.322 17:22:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:30:08.322
00:30:08.322 real 0m22.958s
00:30:08.322 user 1m0.427s
00:30:08.322 sys 0m4.805s
00:30:08.322 17:22:24 --
common/autotest_common.sh@1105 -- # xtrace_disable 00:30:08.322 17:22:24 -- common/autotest_common.sh@10 -- # set +x 00:30:08.322 ************************************ 00:30:08.322 END TEST nvmf_bdevperf 00:30:08.322 ************************************ 00:30:08.322 17:22:24 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:08.322 17:22:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:08.322 17:22:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:08.322 17:22:24 -- common/autotest_common.sh@10 -- # set +x 00:30:08.322 ************************************ 00:30:08.322 START TEST nvmf_target_disconnect 00:30:08.322 ************************************ 00:30:08.322 17:22:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:08.322 * Looking for test storage... 00:30:08.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:08.322 17:22:24 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.322 17:22:24 -- nvmf/common.sh@7 -- # uname -s 00:30:08.322 17:22:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.322 17:22:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.322 17:22:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.322 17:22:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.322 17:22:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.322 17:22:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.322 17:22:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.322 17:22:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.322 17:22:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.322 17:22:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.322 17:22:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.580 17:22:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.580 17:22:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.580 17:22:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.580 17:22:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.580 17:22:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.580 17:22:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.580 17:22:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.580 17:22:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.580 17:22:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.580 17:22:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.580 17:22:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.580 17:22:24 -- paths/export.sh@5 -- # export PATH 00:30:08.580 17:22:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.580 17:22:24 -- nvmf/common.sh@46 -- # : 0 00:30:08.580 17:22:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:08.580 17:22:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:08.580 17:22:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:08.580 17:22:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.580 17:22:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.580 17:22:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:08.580 17:22:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:08.580 17:22:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:08.580 17:22:24 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:08.580 17:22:24 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:08.580 17:22:24 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:08.580 17:22:24 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:30:08.580 17:22:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:08.580 17:22:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.580 17:22:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:08.580 17:22:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:08.580 17:22:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:08.580 17:22:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.580 17:22:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.580 17:22:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.580 17:22:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:08.580 17:22:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:08.580 17:22:24 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:30:08.580 17:22:24 -- common/autotest_common.sh@10 -- # set +x 00:30:10.478 17:22:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:10.478 17:22:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:10.478 17:22:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:10.478 17:22:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:10.478 17:22:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:10.478 17:22:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:10.478 17:22:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:10.478 17:22:26 -- nvmf/common.sh@294 -- # net_devs=() 00:30:10.478 17:22:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:10.478 17:22:26 -- nvmf/common.sh@295 -- # e810=() 00:30:10.478 17:22:26 -- nvmf/common.sh@295 -- # local -ga e810 00:30:10.478 17:22:26 -- nvmf/common.sh@296 -- # x722=() 00:30:10.478 17:22:26 -- nvmf/common.sh@296 -- # local -ga x722 00:30:10.478 17:22:26 -- nvmf/common.sh@297 -- # mlx=() 00:30:10.478 17:22:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:10.478 17:22:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.478 17:22:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:10.478 17:22:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:10.478 17:22:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:10.478 17:22:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:10.478 17:22:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:10.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:10.478 17:22:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:10.478 17:22:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:10.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:10.478 17:22:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:10.478 17:22:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:10.478 17:22:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.478 17:22:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:10.478 17:22:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.478 17:22:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:10.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:10.478 17:22:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.478 17:22:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:10.478 17:22:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.478 17:22:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:10.478 17:22:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.478 17:22:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:10.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:10.478 17:22:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.478 17:22:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:10.478 17:22:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:10.478 17:22:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:10.478 17:22:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:10.478 17:22:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.478 17:22:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.478 17:22:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.478 17:22:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:10.478 17:22:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.478 17:22:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.478 17:22:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:10.478 17:22:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.478 17:22:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.478 17:22:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:10.478 17:22:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:10.478 17:22:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.478 17:22:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.478 17:22:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.478 17:22:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.478 17:22:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:10.478 17:22:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.478 17:22:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.478 17:22:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.478 17:22:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:10.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:10.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:30:10.478 00:30:10.478 --- 10.0.0.2 ping statistics --- 00:30:10.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.478 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:30:10.478 17:22:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:10.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:30:10.478 00:30:10.478 --- 10.0.0.1 ping statistics --- 00:30:10.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.479 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:30:10.479 17:22:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.479 17:22:26 -- nvmf/common.sh@410 -- # return 0 00:30:10.479 17:22:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:10.479 17:22:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.479 17:22:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:10.479 17:22:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:10.479 17:22:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.479 17:22:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:10.479 17:22:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:10.479 17:22:26 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:10.479 17:22:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:10.479 17:22:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:10.479 17:22:26 -- common/autotest_common.sh@10 -- # set +x 00:30:10.479 ************************************ 00:30:10.479 START TEST nvmf_target_disconnect_tc1 00:30:10.479 ************************************ 00:30:10.479 17:22:26 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:30:10.479 17:22:26 -- host/target_disconnect.sh@32 -- # set +e 00:30:10.479 17:22:26 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.479 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.479 [2024-07-20 17:22:26.569503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.479 [2024-07-20 17:22:26.569885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.479 [2024-07-20 17:22:26.569914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc8510 with addr=10.0.0.2, port=4420 00:30:10.479 [2024-07-20 17:22:26.569952] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:10.479 [2024-07-20 17:22:26.569978] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:10.479 [2024-07-20 17:22:26.569992] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:10.479 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:10.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:10.479 Initializing NVMe Controllers 00:30:10.479 17:22:26 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:10.479 17:22:26 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:10.479 17:22:26 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:30:10.479 17:22:26 -- common/autotest_common.sh@1132 -- # return 0 00:30:10.479 
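tc1 above is intentionally negative: it runs the reconnect example against 10.0.0.2:4420 before any target exists, and the script only requires that spdk_nvme_probe() fail cleanly rather than hang or crash. Reduced to its essentials (a sketch of the set +e block in host/target_disconnect.sh, paths and flags as in the trace):

  set +e   # a connect failure here is the expected outcome, not a test error
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
      -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  [ $? -ne 0 ] && echo "probe failed as expected"
  set -e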
17:22:26 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:10.479 17:22:26 -- host/target_disconnect.sh@41 -- # set -e 00:30:10.479 00:30:10.479 real 0m0.091s 00:30:10.479 user 0m0.039s 00:30:10.479 sys 0m0.051s 00:30:10.479 17:22:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:10.479 17:22:26 -- common/autotest_common.sh@10 -- # set +x 00:30:10.479 ************************************ 00:30:10.479 END TEST nvmf_target_disconnect_tc1 00:30:10.479 ************************************ 00:30:10.479 17:22:26 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:10.479 17:22:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:10.479 17:22:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:10.479 17:22:26 -- common/autotest_common.sh@10 -- # set +x 00:30:10.479 ************************************ 00:30:10.479 START TEST nvmf_target_disconnect_tc2 00:30:10.479 ************************************ 00:30:10.479 17:22:26 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:30:10.479 17:22:26 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:10.479 17:22:26 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:10.479 17:22:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:10.479 17:22:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:10.479 17:22:26 -- common/autotest_common.sh@10 -- # set +x 00:30:10.479 17:22:26 -- nvmf/common.sh@469 -- # nvmfpid=668832 00:30:10.479 17:22:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:10.479 17:22:26 -- nvmf/common.sh@470 -- # waitforlisten 668832 00:30:10.479 17:22:26 -- common/autotest_common.sh@819 -- # '[' -z 668832 ']' 00:30:10.479 17:22:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.479 17:22:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:10.479 17:22:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.479 17:22:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:10.479 17:22:26 -- common/autotest_common.sh@10 -- # set +x 00:30:10.736 [2024-07-20 17:22:26.658888] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:10.736 [2024-07-20 17:22:26.658958] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.736 EAL: No free 2048 kB hugepages reported on node 1 00:30:10.736 [2024-07-20 17:22:26.723878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:10.736 [2024-07-20 17:22:26.808819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:10.736 [2024-07-20 17:22:26.808980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.736 [2024-07-20 17:22:26.808998] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.736 [2024-07-20 17:22:26.809011] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
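For tc2, disconnect_init then starts a real target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with reactor mask 0xF0 (cores 4-7, matching the four reactor threads below), and waitforlisten blocks until the application answers on its RPC socket. Roughly (the invocation is as logged; the polling loop is an assumption about what waitforlisten does internally):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Poll the RPC socket until the target responds (hypothetical stand-in for waitforlisten):
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done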
00:30:10.736 [2024-07-20 17:22:26.809115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:10.736 [2024-07-20 17:22:26.809233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:10.736 [2024-07-20 17:22:26.809257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:10.736 [2024-07-20 17:22:26.809262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:11.667 17:22:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:11.667 17:22:27 -- common/autotest_common.sh@852 -- # return 0 00:30:11.667 17:22:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:11.667 17:22:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:11.667 17:22:27 -- common/autotest_common.sh@10 -- # set +x 00:30:11.667 17:22:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.667 17:22:27 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:11.667 17:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.667 17:22:27 -- common/autotest_common.sh@10 -- # set +x 00:30:11.667 Malloc0 00:30:11.667 17:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.667 17:22:27 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:11.667 17:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.667 17:22:27 -- common/autotest_common.sh@10 -- # set +x 00:30:11.667 [2024-07-20 17:22:27.676877] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.667 17:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.667 17:22:27 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:11.667 17:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.667 17:22:27 -- common/autotest_common.sh@10 -- # set +x 00:30:11.667 17:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.667 17:22:27 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.667 17:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.667 17:22:27 -- common/autotest_common.sh@10 -- # set +x 00:30:11.667 17:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.667 17:22:27 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.667 17:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.667 17:22:27 -- common/autotest_common.sh@10 -- # set +x 00:30:11.667 [2024-07-20 17:22:27.705185] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.667 17:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.667 17:22:27 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:11.667 17:22:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:11.667 17:22:27 -- common/autotest_common.sh@10 -- # set +x 00:30:11.667 17:22:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:11.667 17:22:27 -- host/target_disconnect.sh@50 -- # reconnectpid=668987 00:30:11.667 17:22:27 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:11.667 17:22:27 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:11.667 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.579 17:22:29 -- host/target_disconnect.sh@53 -- # kill -9 668832 00:30:13.579 17:22:29 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Write completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Write completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Write completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Write completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Write completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Write completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Write completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Write completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 [2024-07-20 17:22:29.731075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with error (sct=0, sc=8) 00:30:13.579 starting I/O failed 00:30:13.579 Read completed with 
error (sct=0, sc=8)
00:30:13.579 starting I/O failed
00:30:13.579 Read completed with error (sct=0, sc=8)
00:30:13.579 starting I/O failed
[... the Read/Write "completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for every command still outstanding on the failing qpairs; the runs preceding each of the three CQ errors below are elided ...]
00:30:13.580 [2024-07-20 17:22:29.731409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.580 [2024-07-20 17:22:29.731734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:13.580 [2024-07-20 17:22:29.732087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:13.580 [2024-07-20 17:22:29.732383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.580 [2024-07-20 17:22:29.732647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.580 [2024-07-20 17:22:29.732677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.580 qpair failed and we were unable to recover it.
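Note on the failure signature above: "sct=0, sc=8" is the NVMe completion Status Code Type / Status Code pair; SCT 0 is Generic Command Status and SC 0x08 is Command Aborted due to SQ Deletion, the status the host driver assigns to I/O it aborts itself when a queue pair dies, while "CQ transport error -6" is -ENXIO ("No such device or address") reported by spdk_nvme_qpair_process_completions() once the TCP transport has dropped. A minimal standalone sketch (not SPDK code) of how the two fields pack into the 16-bit Status Field defined by the NVMe base spec:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Status Field layout per the NVMe base spec (CQE DW3 bits 31:16):
         * bit 0 = Phase Tag, bits 8:1 = Status Code (SC),
         * bits 11:9 = Status Code Type (SCT). */
        uint16_t sf = (uint16_t)(0x08 << 1);     /* SCT=0, SC=0x08, P=0 */
        unsigned sc  = (sf >> 1) & 0xFF;         /* Status Code */
        unsigned sct = (sf >> 9) & 0x7;          /* Status Code Type */
        printf("sct=%u, sc=%u\n", sct, sc);      /* prints: sct=0, sc=8 */
        return 0;
    }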
00:30:13.580 [2024-07-20 17:22:29.734942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.580 [2024-07-20 17:22:29.735171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.580 [2024-07-20 17:22:29.735250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.580 qpair failed and we were unable to recover it. 00:30:13.580 [2024-07-20 17:22:29.735643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.580 [2024-07-20 17:22:29.735947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.580 [2024-07-20 17:22:29.735973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.580 qpair failed and we were unable to recover it. 00:30:13.580 [2024-07-20 17:22:29.736198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.580 [2024-07-20 17:22:29.736552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.580 [2024-07-20 17:22:29.736579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.580 qpair failed and we were unable to recover it. 00:30:13.580 [2024-07-20 17:22:29.736898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.580 [2024-07-20 17:22:29.737286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.580 [2024-07-20 17:22:29.737317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.580 qpair failed and we were unable to recover it. 00:30:13.580 [2024-07-20 17:22:29.737711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.737992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.738019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.738292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.738532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.738559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.738802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.739066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.739092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 
00:30:13.846 [2024-07-20 17:22:29.739344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.739923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.739951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.740167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.740402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.740485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.740764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.741001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.741028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.741300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.741540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.741571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.741902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.742110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.742150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.742486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.742837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.742864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.743088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.743303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.743328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 
00:30:13.846 [2024-07-20 17:22:29.743647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.743897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.743923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.744181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.744377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.744402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.744757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.745049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.745074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.745311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.745511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.745535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.745771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.746039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.746064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.746305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.746546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.746570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.746815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.747198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.747255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 
00:30:13.846 [2024-07-20 17:22:29.747525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.747808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.747836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.748121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.748411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.748437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.748697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.748933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.748959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.749243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.749488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.749527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.749777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.750069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.750098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.750411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.750749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.750773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.751056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.751453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.751479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 
00:30:13.846 [2024-07-20 17:22:29.751838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.752114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.752139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.752396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.752640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.752664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.752932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.753362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.753406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.753722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.753994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.754020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.754268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.754537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.754562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.754836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.755070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.755095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.755376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.755702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.755743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 
00:30:13.846 [2024-07-20 17:22:29.756028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.756353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.756392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.756668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.756934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.756959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.757208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.757545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.757583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.757882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.758124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.758164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.758432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.758692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.758731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.759007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.759364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.759387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.759676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.759976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.760002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 
00:30:13.846 [2024-07-20 17:22:29.760353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.760665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.760689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.846 qpair failed and we were unable to recover it. 00:30:13.846 [2024-07-20 17:22:29.760982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.846 [2024-07-20 17:22:29.761195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.761221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.761616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.761993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.762019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.762317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.762603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.762628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.762952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.763173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.763198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.763478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.763856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.763880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.764152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.764460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.764500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.847 [2024-07-20 17:22:29.764829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.765089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.765129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.765433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.765671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.765696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.765951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.766175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.766200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.766481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.766705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.766729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.767021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.767475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.767528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.767870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.768194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.768222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.768498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.768738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.768778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.847 [2024-07-20 17:22:29.769065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.769327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.769352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.769700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.769965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.770007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.770253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.770480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.770505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.770787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.771050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.771075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.771335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.771646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.771686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.772125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.772451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.772480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.772784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.773071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.773097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.847 [2024-07-20 17:22:29.773400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.773702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.773726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.773986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.774235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.774261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.774540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.774834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.774860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.775158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.775457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.775482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.775790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.776156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.776211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.776471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.776767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.776800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.777075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.777335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.777374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.847 [2024-07-20 17:22:29.777694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.778005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.778031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.778296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.778585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.778625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.778988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.779283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.779308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.779542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.779778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.779825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.780055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.780349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.780373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.780685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.780976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.781002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.781270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.781588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.781613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.847 [2024-07-20 17:22:29.781862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.782151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.782176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.782524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.782847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.782872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.783254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.783536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.783564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.783890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.784218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.784245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.784550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.784865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.784891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.785164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.785442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.785466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.785755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.786290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.786335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.847 [2024-07-20 17:22:29.786649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.786897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.786939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.787209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.787499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.787524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.787845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.788138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.788163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.788505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.788828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.788867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.789143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.789347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.789372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.789628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.789917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.789941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.790317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.790645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.790672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.847 [2024-07-20 17:22:29.791000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.791307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.791332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.791664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.791935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.791962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.792238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.792535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.792560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.792823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.793055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.793097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.793352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.793551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.793576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.793842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.794120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.794144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.794413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.794729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.794770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.847 [2024-07-20 17:22:29.795072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.795358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.795383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.795723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.796006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.796033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.796355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.796603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.796628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.796889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.797261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.797300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.797633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.797997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.798023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.798372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.798648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.798673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.798965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.799200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.799224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.847 [2024-07-20 17:22:29.799517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.799854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.799883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.800162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.800411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.800443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.800674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.800920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.800946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.801165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.801491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.801514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.801776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.802030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.802056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.802405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.802861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.802887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 00:30:13.847 [2024-07-20 17:22:29.803197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.803440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.847 [2024-07-20 17:22:29.803464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.847 qpair failed and we were unable to recover it. 
00:30:13.848 [2024-07-20 17:22:29.803708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.803946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.803988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.804256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.804461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.804486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.804773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.805038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.805064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.805365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.805591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.805615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.805873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.806106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.806150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.806494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.806772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.806803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.807178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.807497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.807524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.807839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.808131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.808154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.808450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.808754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.808778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.809073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.809316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.809355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.809603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.809870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.809896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.810113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.810351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.810376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.810593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.810819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.810844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.811168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.811457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.811481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.811676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.811921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.811966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.812252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.812557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.812581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.812957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.813254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.813278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.813560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.813814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.813855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.814103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.814369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.814393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.814671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.814882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.814907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.815148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.815455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.815480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.815870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.816132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.816172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.816390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.816685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.816710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.816963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.817212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.817252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.817554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.817930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.817961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.818213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.818448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.818473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.818735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.819009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.819035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.819299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.819560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.819585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.819814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.820034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.820060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.820309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.820580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.820605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.820857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.821123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.821148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.821464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.821728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.821752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.822077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.822367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.822391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.822676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.823008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.823052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.823438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.823692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.823731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.823985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.824221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.824247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.824594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.824897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.824922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.825209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.825446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.825471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.825854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.826096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.826122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.826408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.826738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.826762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.827006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.827272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.827297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.827642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.827989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.828016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.828299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.828553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.828593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.828846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.829110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.829136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.829415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.829617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.829643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.829894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.830112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.830136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.830389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.830603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.830627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.830899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.831183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.831208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.831470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.831712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.831751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.832002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.832250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.832291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.832601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.832883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.832908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.833159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.833477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.833515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.833753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.834033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.834060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.834405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.834669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.834694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.835023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.835294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.835319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.835558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.835903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.835927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.836282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.836584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.836609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.836880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.837140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.848 [2024-07-20 17:22:29.837180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.848 qpair failed and we were unable to recover it.
00:30:13.848 [2024-07-20 17:22:29.837462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.837688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.837713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.837946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.838184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.838223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.838468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.838748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.838772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.839055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.839291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.839316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.839612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.839917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.839943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.840205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.840460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.840485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.840759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.841056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.841082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.841713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.842033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.842059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.842335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.842575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.842615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.842890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.843127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.843153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.843424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.843756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.843780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.844066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.844486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.844539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.844925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.845199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.845224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.845539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.845823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.845849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.846110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.846544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.846585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.846874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.847107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.847132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.847410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.847817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.847846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.848130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.848371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.848396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.848649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.848874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.848899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.849128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.849397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.849422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.849714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.850215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.850260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.850576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.850824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.850851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.851149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.851385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.851408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.851693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.851973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.851998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.852372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.852607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.852631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.852959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.853245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.853272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.853624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.853978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.854004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.854317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.854553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.854578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.854901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.855164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.855190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.855459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.855746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.855771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.856050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.856316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.856340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.856614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.856896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.856922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.857200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.857519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.857542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.857790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.858053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.858078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.858375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.858658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.858683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.858942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.859204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.859243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.859548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.859929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.859953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.860325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.860730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.860779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.861186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.861511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.861554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.861790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.862176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.862233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.862510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.862758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.862807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.863070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.863351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.863375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.863630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.863923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.863948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.864263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.864464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.864489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.864805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.865103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.865130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.865435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.865703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.865727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.866023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.866285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.866311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.866578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.866898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.866938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.867309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.867549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.867589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.867842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.868135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.868175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.868439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.868832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.868855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.869149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.869455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.869480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.869741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.869997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.870024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.870305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.870537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.870562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.870862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.871142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.871167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.871712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.871980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.872006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.872213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.872416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.872441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.872791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.873126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.873167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.873495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.873771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.873815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.874099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.874327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.874353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.874611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.874842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.874869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.875205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.875523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.875548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.875815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.876041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.876082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.876360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.876579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.876602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.876825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.877065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.877104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.877352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.877618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.877644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.878007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.878311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.878337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.878580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.878810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.849 [2024-07-20 17:22:29.878840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:13.849 qpair failed and we were unable to recover it.
00:30:13.849 [2024-07-20 17:22:29.879144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.849 [2024-07-20 17:22:29.879357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.849 [2024-07-20 17:22:29.879384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.849 qpair failed and we were unable to recover it. 00:30:13.849 [2024-07-20 17:22:29.879607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.849 [2024-07-20 17:22:29.879922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.849 [2024-07-20 17:22:29.879962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.849 qpair failed and we were unable to recover it. 00:30:13.849 [2024-07-20 17:22:29.880243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.849 [2024-07-20 17:22:29.880493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.849 [2024-07-20 17:22:29.880532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.849 qpair failed and we were unable to recover it. 00:30:13.849 [2024-07-20 17:22:29.880788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.849 [2024-07-20 17:22:29.881068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.881108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.881369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.881640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.881664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.882003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.882316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.882340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.882599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.882872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.882897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 
00:30:13.850 [2024-07-20 17:22:29.883221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.883841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.883865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.884172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.884472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.884497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.884826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.885123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.885147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.885390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.885629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.885654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.885905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.886164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.886188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.886490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.886809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.886850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.887202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.887559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.887587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 
00:30:13.850 [2024-07-20 17:22:29.887910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.888156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.888182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.888441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.888689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.888728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.889043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.889290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.889330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.889695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.889983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.890008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.890258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.890485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.890510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.890778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.891080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.891106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.891442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.891876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.891915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 
00:30:13.850 [2024-07-20 17:22:29.892235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.892602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.892628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.892900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.893150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.893175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.893552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.893841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.893867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.894162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.894479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.894518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.894790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.895124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.895168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.895527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.895816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.895842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.896162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.896479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.896503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 
00:30:13.850 [2024-07-20 17:22:29.896785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.897065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.897090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.897376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.897575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.897604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.897823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.898041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.898082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.898301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.898600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.898624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.899034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.899321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.899347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.899606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.899850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.899876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.900126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.900348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.900374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 
00:30:13.850 [2024-07-20 17:22:29.900645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.900887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.900913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.901150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.901400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.901424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.901726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.902085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.902143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.902465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.902807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.902833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.903071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.903344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.903374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.903646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.903889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.903929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.904281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.904572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.904595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 
00:30:13.850 [2024-07-20 17:22:29.904871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.905132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.905171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.905415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.905751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.905827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.906260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.906611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.906652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.906954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.907226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.907252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.907484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.907773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.907803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.908192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.908513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.908541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.908906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.909148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.909174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 
00:30:13.850 [2024-07-20 17:22:29.909424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.909692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.909723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.910043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.910287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.910311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.910550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.910789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.910838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.911101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.911582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.911630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.911958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.912245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.912269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.912531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.912918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.912943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.913262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.913542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.913566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 
00:30:13.850 [2024-07-20 17:22:29.913832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.914070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.914097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.914397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.914641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.914682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.915121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.915468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.915496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.915839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.916342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.916401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.916726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.916986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.917014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.917299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.917591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.917616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.917883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.918174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.918199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 
00:30:13.850 [2024-07-20 17:22:29.918486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.918832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.918860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.919153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.919396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.919421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.919713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.919929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.919955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.920165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.920394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.920419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.920675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.920991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.921019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.850 [2024-07-20 17:22:29.921249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.921516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.850 [2024-07-20 17:22:29.921541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.850 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.921817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.922133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.922175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 
00:30:13.851 [2024-07-20 17:22:29.922460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.922708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.922733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.923039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.923280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.923321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.923624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.923829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.923855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.924083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.924394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.924433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.924740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.924976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.925003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.925244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.925495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.925520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.925762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.926040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.926065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 
00:30:13.851 [2024-07-20 17:22:29.926391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.926631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.926655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.926899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.927124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.927148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.927458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.927830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.927870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.928164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.928423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.928463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.928718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.928958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.928983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.929336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.929680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.929705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.930003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.930238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.930263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 
00:30:13.851 [2024-07-20 17:22:29.930559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.930803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.930843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.931153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.931451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.931475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.931810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.932253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.932297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.932598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.932842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.932882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.933166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.933448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.933474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.933739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.934007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.934033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.934371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.934606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.934630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 
00:30:13.851 [2024-07-20 17:22:29.934875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.935125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.935150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.935431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.935692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.935718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.935981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.936249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.936274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.936610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.936909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.936937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.937517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.937866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.937892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.938161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.938424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.938448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.938734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.938977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.939018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 
00:30:13.851 [2024-07-20 17:22:29.939403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.939771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.939818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.940192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.940515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.940542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.940859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.941114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.941139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.941383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.941641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.941667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.941925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.942180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.942205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.942466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.942721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.942760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.943225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.943538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.943567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 
00:30:13.851 [2024-07-20 17:22:29.943920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.944260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.944285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.944564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.944840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.944866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.945240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.945603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.945630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.945938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.946183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.946224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.946512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.946764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.946820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.947128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.947333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.947359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.947587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.947845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.947871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 
00:30:13.851 [2024-07-20 17:22:29.948088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.948323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.948349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.948569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.948833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.948860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.949163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.949565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.949611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.949851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.950093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.950120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.950374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.950621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.950647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.950889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.951294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.951358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 00:30:13.851 [2024-07-20 17:22:29.951663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.952097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.851 [2024-07-20 17:22:29.952136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:13.851 qpair failed and we were unable to recover it. 
00:30:14.120 [2024-07-20 17:22:30.030259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.030496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.030522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.030757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.031026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.031053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.031337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.031571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.031596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.031805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.032016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.032042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.032249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.032508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.032535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.032769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.032986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.033012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.033250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.033459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.033486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 
00:30:14.120 [2024-07-20 17:22:30.033754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.033966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.033995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.034215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.034448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.034474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.034708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.034955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.034984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.035245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.035475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.035501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.035770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.036011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.036036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.036243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.036485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.036511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.036730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.036950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.036976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 
00:30:14.120 [2024-07-20 17:22:30.037185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.037394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.037419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.037676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.037893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.037922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.038132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.038359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.038385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.038591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.038826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.038871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.120 qpair failed and we were unable to recover it. 00:30:14.120 [2024-07-20 17:22:30.039115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.120 [2024-07-20 17:22:30.039321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.039347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 00:30:14.121 [2024-07-20 17:22:30.039573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.039773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.039803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 00:30:14.121 [2024-07-20 17:22:30.040048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.040339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.040371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 
00:30:14.121 [2024-07-20 17:22:30.040720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.041020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.041052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 00:30:14.121 [2024-07-20 17:22:30.041322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.041857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.041887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 00:30:14.121 [2024-07-20 17:22:30.042151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.042388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.042417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 00:30:14.121 [2024-07-20 17:22:30.042678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.042939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.042968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 00:30:14.121 [2024-07-20 17:22:30.043222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.043526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.043555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 00:30:14.121 [2024-07-20 17:22:30.043814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.044103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.044131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 00:30:14.121 [2024-07-20 17:22:30.044422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.044709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.121 [2024-07-20 17:22:30.044737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.121 qpair failed and we were unable to recover it. 
00:30:14.121 [2024-07-20 17:22:30.045028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.045234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.045260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.121 qpair failed and we were unable to recover it.
00:30:14.121 [2024-07-20 17:22:30.045467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.045728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.045771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.121 qpair failed and we were unable to recover it.
00:30:14.121 [2024-07-20 17:22:30.046048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.046309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.046337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.121 qpair failed and we were unable to recover it.
00:30:14.121 [2024-07-20 17:22:30.046623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.046911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.046940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.121 qpair failed and we were unable to recover it.
00:30:14.121 [2024-07-20 17:22:30.047197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.047509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.047540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:14.121 qpair failed and we were unable to recover it.
00:30:14.121 [2024-07-20 17:22:30.047829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.048268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.048312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:14.121 qpair failed and we were unable to recover it.
00:30:14.121 [2024-07-20 17:22:30.048609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.048875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.121 [2024-07-20 17:22:30.048905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:14.121 qpair failed and we were unable to recover it.
00:30:14.124 [2024-07-20 17:22:30.108574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.108803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.108828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.124 qpair failed and we were unable to recover it. 00:30:14.124 [2024-07-20 17:22:30.109118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.109506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.109543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.124 qpair failed and we were unable to recover it. 00:30:14.124 [2024-07-20 17:22:30.109788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.110013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.110038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.124 qpair failed and we were unable to recover it. 00:30:14.124 [2024-07-20 17:22:30.110270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.110725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.110776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.124 qpair failed and we were unable to recover it. 00:30:14.124 [2024-07-20 17:22:30.111081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.111400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.111428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.124 qpair failed and we were unable to recover it. 00:30:14.124 [2024-07-20 17:22:30.111685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.111912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.111938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.124 qpair failed and we were unable to recover it. 00:30:14.124 [2024-07-20 17:22:30.112241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.112754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.112814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.124 qpair failed and we were unable to recover it. 
00:30:14.124 [2024-07-20 17:22:30.113072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.113286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.113311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.124 qpair failed and we were unable to recover it. 00:30:14.124 [2024-07-20 17:22:30.113521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.113826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.113854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.124 qpair failed and we were unable to recover it. 00:30:14.124 [2024-07-20 17:22:30.114081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.114339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.124 [2024-07-20 17:22:30.114379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.114641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.114968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.114996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.115279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.115817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.115865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.116152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.116630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.116679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.116936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.117197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.117226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 
00:30:14.125 [2024-07-20 17:22:30.117496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.117783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.117819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.118071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.118398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.118435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.118690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.118933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.118974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.119265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.119624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.119651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.119914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.120390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.120444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.120738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.121049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.121075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.121327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.121818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.121846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 
00:30:14.125 [2024-07-20 17:22:30.122133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.122566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.122617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.122890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.123180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.123207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.123509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.123787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.123816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.124082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.124446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.124501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.124786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.125054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.125081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.125374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.125654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.125681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.125989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.126245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.126273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 
00:30:14.125 [2024-07-20 17:22:30.126532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.126830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.126855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.127132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.127534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.127585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.127827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.128113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.128139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.128379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.128615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.128657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.128914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.129220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.129274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.129527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.129812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.129839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.130079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.130505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.130558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 
00:30:14.125 [2024-07-20 17:22:30.130826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.131089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.131115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.131353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.131629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.131655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.131898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.132135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.132160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.132378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.132668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.132713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.132974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.133211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.133235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.125 qpair failed and we were unable to recover it. 00:30:14.125 [2024-07-20 17:22:30.133479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.125 [2024-07-20 17:22:30.133750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.133775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.133995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.134233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.134258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 
00:30:14.126 [2024-07-20 17:22:30.134493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.134730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.134754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.135030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.135236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.135260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.135568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.135771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.135806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.136045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.136343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.136367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.136575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.136800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.136826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.137096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.137370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.137394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.137629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.137839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.137864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 
00:30:14.126 [2024-07-20 17:22:30.138089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.138323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.138349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.138592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.138876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.138901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.139111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.139348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.139373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.139611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.139898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.139923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.140134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.140370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.140395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.140627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.140835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.140879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.141143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.141344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.141368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 
00:30:14.126 [2024-07-20 17:22:30.141613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.141892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.141918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.142138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.142492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.142517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.142731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.142984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.143013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.143273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.143599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.143645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.143931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.144193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.144218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.144447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.144685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.144709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.144974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.145265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.145315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 
00:30:14.126 [2024-07-20 17:22:30.145571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.145831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.145857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.146065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.146348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.146375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.146639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.146877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.146902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.126 [2024-07-20 17:22:30.147144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.147381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.126 [2024-07-20 17:22:30.147406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.126 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.147690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.147944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.147971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.148186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.148448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.148476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.148710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.148948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.148991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 
00:30:14.127 [2024-07-20 17:22:30.149252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.149748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.149808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.150072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.150284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.150309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.150525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.150738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.150764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.150989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.151201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.151227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.151441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.151711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.151734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.151949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.152157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.152184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.152399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.152603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.152628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 
00:30:14.127 [2024-07-20 17:22:30.152869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.153072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.153097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.153380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.153780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.153869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.154126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.154628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.154676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.154937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.155415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.155466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.155754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.156006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.156034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.156275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.156713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.156764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.157008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.157473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.157524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 
00:30:14.127 [2024-07-20 17:22:30.157814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.158101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.158129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.158412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.158911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.158939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.159201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.159452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.159494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.159745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.160028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.160066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.160352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.160809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.160868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.161095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.161622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.161670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.161967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.162196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.162224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 
00:30:14.127 [2024-07-20 17:22:30.162468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.162708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.162733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.163012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.163449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.163501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.163775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.164050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.164078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.164341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.164821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.164877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.165123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.165338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.165364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.165582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.165882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.165911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.127 qpair failed and we were unable to recover it. 00:30:14.127 [2024-07-20 17:22:30.166149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.169332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.127 [2024-07-20 17:22:30.169369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.128 qpair failed and we were unable to recover it. 
00:30:14.128 [2024-07-20 17:22:30.169659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.169916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.169946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.128 qpair failed and we were unable to recover it. 00:30:14.128 [2024-07-20 17:22:30.170232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.170707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.170753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.128 qpair failed and we were unable to recover it. 00:30:14.128 [2024-07-20 17:22:30.171073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.171286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.171313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.128 qpair failed and we were unable to recover it. 00:30:14.128 [2024-07-20 17:22:30.171582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.171818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.171852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.128 qpair failed and we were unable to recover it. 00:30:14.128 [2024-07-20 17:22:30.172098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.172329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.172356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.128 qpair failed and we were unable to recover it. 00:30:14.128 [2024-07-20 17:22:30.172611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.172875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.172901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.128 qpair failed and we were unable to recover it. 00:30:14.128 [2024-07-20 17:22:30.173128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.173419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.128 [2024-07-20 17:22:30.173448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.128 qpair failed and we were unable to recover it. 
00:30:14.128 [2024-07-20 17:22:30.173820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.128 [2024-07-20 17:22:30.174119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.128 [2024-07-20 17:22:30.174144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:14.128 qpair failed and we were unable to recover it.
00:30:14.128 .. 00:30:14.133 [2024-07-20 17:22:30.174407 .. 17:22:30.266837] (the same failure cycle repeats continuously: one or two "posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111" entries, then "nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.")
00:30:14.133 [2024-07-20 17:22:30.267056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.133 [2024-07-20 17:22:30.267326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.133 [2024-07-20 17:22:30.267353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:14.133 qpair failed and we were unable to recover it.
00:30:14.133 [2024-07-20 17:22:30.267610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.133 [2024-07-20 17:22:30.267862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.133 [2024-07-20 17:22:30.267888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.133 qpair failed and we were unable to recover it. 00:30:14.133 [2024-07-20 17:22:30.268139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.268528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.268555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.268852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.269071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.269113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.269340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.269582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.269608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.269860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.270069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.270093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.270294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.270506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.270530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.270769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.271037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.271062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 
00:30:14.400 [2024-07-20 17:22:30.271325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.271551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.271575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.271815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.272056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.272081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.272309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.272576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.272601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.272844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.273060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.273084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.273313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.273566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.273591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.273807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.274043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.274069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.274303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.274518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.274543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 
00:30:14.400 [2024-07-20 17:22:30.274757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.274995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.275020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.275232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.275490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.275514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.275727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.275964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.275990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.276245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.276478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.276503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.276767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.277028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.277054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.277263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.277504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.277534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.277774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.278021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.278048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 
00:30:14.400 [2024-07-20 17:22:30.278270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.278513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.278537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.400 qpair failed and we were unable to recover it. 00:30:14.400 [2024-07-20 17:22:30.278755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.400 [2024-07-20 17:22:30.278981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.279008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.279248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.279459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.279485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.279725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.279943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.279970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.280176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.280379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.280403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.280653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.280891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.280916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.281158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.281392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.281417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 
00:30:14.401 [2024-07-20 17:22:30.281659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.281895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.281921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.282133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.282391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.282420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.282677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.282900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.282926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.283159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.283386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.283410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.283674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.283922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.283947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.284179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.284410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.284435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.284679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.284945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.284970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 
00:30:14.401 [2024-07-20 17:22:30.285183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.285392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.285417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.285655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.285889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.285914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.286123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.286353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.286377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.286612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.286815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.286840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.287074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.287306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.287335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.287582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.287813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.287839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.288082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.288313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.288340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 
00:30:14.401 [2024-07-20 17:22:30.288572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.288808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.288833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.289046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.289271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.289296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.289562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.289847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.289874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.290137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.290399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.290424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.290664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.290961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.290988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.291279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.291587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.291648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.291925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.292222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.292274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 
00:30:14.401 [2024-07-20 17:22:30.292503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.292802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.292833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.293086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.293468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.293495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.293766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.294017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.294043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.294305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.294537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.401 [2024-07-20 17:22:30.294562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.401 qpair failed and we were unable to recover it. 00:30:14.401 [2024-07-20 17:22:30.294832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.295109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.295137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.295426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.295729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.295774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.296011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.296304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.296370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 
00:30:14.402 [2024-07-20 17:22:30.296633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.296926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.296952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.297193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.297398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.297423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.297686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.297929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.297954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.298198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.298428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.298455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.298723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.298962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.298990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.299265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.299628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.299673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.299943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.300221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.300270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 
00:30:14.402 [2024-07-20 17:22:30.300521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.300764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.300791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.301071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.301362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.301388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.301623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.301893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.301919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.302159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.302444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.302508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.302768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.303012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.303038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.303298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.303687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.303739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.304023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.304264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.304288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 
00:30:14.402 [2024-07-20 17:22:30.304531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.304779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.304812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.305057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.305323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.305347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.305555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.305853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.305879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.306132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.306463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.306491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.306709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.306954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.306981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.307241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.307599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.307643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.307904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.308162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.308186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 
00:30:14.402 [2024-07-20 17:22:30.308534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.308855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.308881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.309141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.309400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.309427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.309713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.309977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.310002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.310268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.310733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.310787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.311060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.311348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.311377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.311701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.311989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.312017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.402 qpair failed and we were unable to recover it. 00:30:14.402 [2024-07-20 17:22:30.312304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.402 [2024-07-20 17:22:30.312547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.312589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 
00:30:14.403 [2024-07-20 17:22:30.312850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.313128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.313158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.313444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.313894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.313921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.314287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.314762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.314822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.315142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.315424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.315469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.315776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.316041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.316082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.316342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.316784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.316849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.317131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.317587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.317638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 
00:30:14.403 [2024-07-20 17:22:30.317934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.318195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.318223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.318478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.318898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.318924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.319354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.319842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.319894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.320129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.320369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.320411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.320702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.320984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.321011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.321260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.321518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.321559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 00:30:14.403 [2024-07-20 17:22:30.321817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.322095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.403 [2024-07-20 17:22:30.322124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:14.403 qpair failed and we were unable to recover it. 
00:30:14.403 [2024-07-20 17:22:30.322414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.403 [2024-07-20 17:22:30.322821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.403 [2024-07-20 17:22:30.322875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:14.403 qpair failed and we were unable to recover it.
[... dozens of identical failure triads for tqpair=0x7f555c000b90 elided (timestamps 17:22:30.323-17:22:30.353): two "connect() failed, errno = 111" entries from posix.c:1032:posix_sock_create, one "sock connection error" from nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." ...]
00:30:14.405 [2024-07-20 17:22:30.354058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.405 [2024-07-20 17:22:30.354328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.405 [2024-07-20 17:22:30.354360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.405 qpair failed and we were unable to recover it.
[... roughly a hundred more of the same triads, now for tqpair=0x7f554c000b90, elided (timestamps 17:22:30.354-17:22:30.420) ...]
00:30:14.408 [2024-07-20 17:22:30.420158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.408 [2024-07-20 17:22:30.420461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.408 [2024-07-20 17:22:30.420491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.408 qpair failed and we were unable to recover it.
00:30:14.408 [2024-07-20 17:22:30.420767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.421056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.421084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.408 qpair failed and we were unable to recover it. 00:30:14.408 [2024-07-20 17:22:30.421340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.421875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.421904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.408 qpair failed and we were unable to recover it. 00:30:14.408 [2024-07-20 17:22:30.422165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.422466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.422511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.408 qpair failed and we were unable to recover it. 00:30:14.408 [2024-07-20 17:22:30.422770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.423076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.423103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.408 qpair failed and we were unable to recover it. 00:30:14.408 [2024-07-20 17:22:30.423451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.423748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.423807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.408 qpair failed and we were unable to recover it. 00:30:14.408 [2024-07-20 17:22:30.424063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.424359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.424406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.408 qpair failed and we were unable to recover it. 00:30:14.408 [2024-07-20 17:22:30.424742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.408 [2024-07-20 17:22:30.425019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.425048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 
00:30:14.409 [2024-07-20 17:22:30.425307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.425564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.425592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.425842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.426114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.426142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.426416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.426678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.426702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.426974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.427471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.427522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.427864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.428113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.428141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.428426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.428670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.428697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.428987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.429426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.429480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 
00:30:14.409 [2024-07-20 17:22:30.429812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.430075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.430103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.430388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.430721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.430748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.431008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.431275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.431305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.431569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.431831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.431860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.432103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.432362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.432390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.433109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.433829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.433861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.434126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.434431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.434461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 
00:30:14.409 [2024-07-20 17:22:30.434807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.435070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.435100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.435397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.435628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.435655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.437157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.437606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.437637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.437906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.438211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.438261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.438526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.438855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.438885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.439158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.439526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.439575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.439852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.440136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.440165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 
00:30:14.409 [2024-07-20 17:22:30.440434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.440762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.440790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.441043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.441409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.441460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.441740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.441992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.442022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.442313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.442856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.442885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.443143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.443442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.443492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.443780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.444045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.444074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.444357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.444587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.444614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 
00:30:14.409 [2024-07-20 17:22:30.444920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.445189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.445216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.409 qpair failed and we were unable to recover it. 00:30:14.409 [2024-07-20 17:22:30.445600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.445923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.409 [2024-07-20 17:22:30.445949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.446241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.446639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.446664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.446977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.447229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.447257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.447519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.447823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.447849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.448132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.448416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.448445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.448707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.448962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.448987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 
00:30:14.410 [2024-07-20 17:22:30.449245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.449544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.449591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.449857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.450096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.450124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.450401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.450808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.450842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.451137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.451425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.451472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.451745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.452038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.452064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.452357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.452706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.452750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.453028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.453260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.453286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 
00:30:14.410 [2024-07-20 17:22:30.453648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.453948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.453978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.454267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.454581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.454635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.454894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.455204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.455261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.455593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.455888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.455917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.456180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.456463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.456488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.456756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.457019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.457050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.457281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.457508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.457532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 
00:30:14.410 [2024-07-20 17:22:30.457810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.458043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.458069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.458362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.458874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.458899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.459150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.459407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.459452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.459752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.460111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.460155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.460450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.460742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.460767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.461044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.461356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.461381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.461656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.461901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.461928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 
00:30:14.410 [2024-07-20 17:22:30.462191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.462462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.462490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.462748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.462999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.463026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.463305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.463829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.463854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.464144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.464620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.410 [2024-07-20 17:22:30.464671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.410 qpair failed and we were unable to recover it. 00:30:14.410 [2024-07-20 17:22:30.464970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.465304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.465359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.465673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.465960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.465986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.466292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.466565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.466612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 
00:30:14.411 [2024-07-20 17:22:30.466878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.467153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.467178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.467520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.467807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.467836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.468102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.468424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.468452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.468720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.468988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.469017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.469290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.469638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.469697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.469996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.470369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.470396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.470664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.470913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.470939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 
00:30:14.411 [2024-07-20 17:22:30.471193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.471566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.471590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.471863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.472322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.472367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.472681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.472963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.472998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.473272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.473571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.473606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.473867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.474143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.474172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.474467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.474903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.474932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.475206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.475496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.475524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 
00:30:14.411 [2024-07-20 17:22:30.475888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.476121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.476152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.476455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.476823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.476849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.477178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.477547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.477601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.477882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.478106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.478131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.478423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.478789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.478826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.479067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.479364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.479397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.479688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.479917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.479946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 
00:30:14.411 [2024-07-20 17:22:30.480222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.480557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.480607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.480873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.481130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.481157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.481448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.481719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.481747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.482018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.482984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.483015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.483236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.483514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.483539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.483814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.484057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.484091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 00:30:14.411 [2024-07-20 17:22:30.484395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.484632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.411 [2024-07-20 17:22:30.484674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.411 qpair failed and we were unable to recover it. 
00:30:14.412 [2024-07-20 17:22:30.485024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.412 [2024-07-20 17:22:30.485246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.412 [2024-07-20 17:22:30.485271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.412 qpair failed and we were unable to recover it.
[... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats with no variation other than timestamps, continuously from 17:22:30.485 through 17:22:30.572 ...]
00:30:14.684 [2024-07-20 17:22:30.572386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.572631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.572656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.572958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.573218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.573248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.573483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.573712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.573742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.574013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.574273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.574298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.574540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.574876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.574906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.575194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.575485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.575516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.575806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.576117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.576149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 
00:30:14.684 [2024-07-20 17:22:30.576630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.576911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.576941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.577209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.577728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.577785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.578061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.578332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.578360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.578648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.578948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.578976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.579236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.579549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.579576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.579806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.580066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.580093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.580358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.580659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.580685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 
00:30:14.684 [2024-07-20 17:22:30.580968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.581243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.581272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.581562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.581908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.581937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.582219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.582754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.582809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.583247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.583819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.583876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.584169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.584537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.584590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.584930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.585242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.585267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.585556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.585842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.585868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 
00:30:14.684 [2024-07-20 17:22:30.586320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.586849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.586882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.684 qpair failed and we were unable to recover it. 00:30:14.684 [2024-07-20 17:22:30.587165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.684 [2024-07-20 17:22:30.587452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.587480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.587755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.588026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.588057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.588316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.588552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.588596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.588890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.589139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.589180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.589437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.589693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.589718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.589995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.590233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.590273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 
00:30:14.685 [2024-07-20 17:22:30.590533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.590770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.590814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.591084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.591625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.591675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.591959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.592502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.592552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.592836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.593106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.593133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.593409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.593696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.593746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.594034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.594323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.594350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.594599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.594844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.594887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 
00:30:14.685 [2024-07-20 17:22:30.595141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.595366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.595394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.595689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.595970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.596001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.596287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.596826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.596873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.597140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.597511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.597558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.597820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.598085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.598110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.598330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.598604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.598650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.598942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.599159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.599185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 
00:30:14.685 [2024-07-20 17:22:30.599441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.599739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.599768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.600047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.600296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.600324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.600586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.600857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.600887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.601148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.601698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.601754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.602026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.602554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.602608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.602875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.603129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.603158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.603460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.603752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.603780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 
00:30:14.685 [2024-07-20 17:22:30.604089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.604349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.604374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.604657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.604921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.604950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.605241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.605662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.605710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.606044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.606282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.685 [2024-07-20 17:22:30.606307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.685 qpair failed and we were unable to recover it. 00:30:14.685 [2024-07-20 17:22:30.606588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.606876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.606904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.607169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.607458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.607503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.607788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.608078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.608106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 
00:30:14.686 [2024-07-20 17:22:30.608345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.608663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.608711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.608964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.609212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.609241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.609507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.609750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.609778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.610060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.610299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.610327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.610585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.610846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.610875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.611162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.611612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.611672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.611968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.612504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.612552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 
00:30:14.686 [2024-07-20 17:22:30.612838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.613101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.613128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.613394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.613680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.613707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.613975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.614200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.614224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.614571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.614940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.614964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.615242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.615719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.615767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.616025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.616453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.616504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.616773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.617167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.617211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 
00:30:14.686 [2024-07-20 17:22:30.617489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.617760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.617788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.618056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.618303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.618332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.618669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.618965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.618992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.619253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.619651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.619704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.619961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.620238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.620266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.620604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.620832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.620861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.621088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.621343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.621371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 
00:30:14.686 [2024-07-20 17:22:30.621658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.621890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.621918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.622203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.622611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.622662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.622940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.623220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.623248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.623485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.623712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.623739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.624007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.624384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.624414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.624677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.624945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.624975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 00:30:14.686 [2024-07-20 17:22:30.625274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.625759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.686 [2024-07-20 17:22:30.625816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.686 qpair failed and we were unable to recover it. 
00:30:14.687 [2024-07-20 17:22:30.626104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.626582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.626632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.626909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.627154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.627195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.627428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.627679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.627707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.627978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.628236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.628264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.628561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.628939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.628968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.629234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.629534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.629584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.629879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.630125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.630168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 
00:30:14.687 [2024-07-20 17:22:30.630456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.630711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.630739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.631023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.631271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.631313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.631602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.631863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.631892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.632133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.632476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.632500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.632780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.633026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.633060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.633300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.633576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.633604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 00:30:14.687 [2024-07-20 17:22:30.633877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.634139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.687 [2024-07-20 17:22:30.634167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.687 qpair failed and we were unable to recover it. 
00:30:14.687 [2024-07-20 17:22:30.634401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.687 [2024-07-20 17:22:30.634641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.687 [2024-07-20 17:22:30.634683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.687 qpair failed and we were unable to recover it.
[... the same four-line failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every subsequent retry from 17:22:30.634988 through 17:22:30.731427 ...]
00:30:14.692 [2024-07-20 17:22:30.731650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.731898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.731941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-07-20 17:22:30.732221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.732450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.732475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-07-20 17:22:30.732720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.732961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.732987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-07-20 17:22:30.733198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.733413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.733438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-07-20 17:22:30.733713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.733943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.733969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-07-20 17:22:30.734219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.734451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.692 [2024-07-20 17:22:30.734477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.692 qpair failed and we were unable to recover it. 00:30:14.692 [2024-07-20 17:22:30.734714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.734964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.734990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 
00:30:14.693 [2024-07-20 17:22:30.735238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.735552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.735577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.735816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.736043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.736069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.736283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.736500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.736525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.736736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.737004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.737030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.737238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.737474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.737514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.737856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.738064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.738091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.738305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.738545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.738570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 
00:30:14.693 [2024-07-20 17:22:30.738778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.739043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.739068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.739347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.739571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.739598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.739812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.740068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.740093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.740324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.740535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.740560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.740815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.741056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.741083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.741322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.741550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.741593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.741874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.742097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.742123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 
00:30:14.693 [2024-07-20 17:22:30.742370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.742581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.742608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.742828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.743042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.743068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.743273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.743545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.743587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.743826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.744037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.744062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.744318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.744523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.744547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.744781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.745029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.745054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.745279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.745520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.745547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 
00:30:14.693 [2024-07-20 17:22:30.745811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.746048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.746075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.746310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.746563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.746588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.746804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.747016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.747043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.747253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.747488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.747512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.747787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.748039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.748081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.748385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.748619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.748644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.748865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.749102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.749127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 
00:30:14.693 [2024-07-20 17:22:30.749366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.749632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.749657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.693 qpair failed and we were unable to recover it. 00:30:14.693 [2024-07-20 17:22:30.749895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.693 [2024-07-20 17:22:30.750131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.750158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.750392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.750631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.750658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.750889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.751129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.751157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.751398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.751609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.751651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.751890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.752152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.752177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.752386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.752593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.752618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 
00:30:14.694 [2024-07-20 17:22:30.752848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.753141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.753167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.753377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.753608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.753633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.753888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.754099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.754129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.754397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.754639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.754664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.754877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.755085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.755110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.755394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.755611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.755635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.755914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.756149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.756177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 
00:30:14.694 [2024-07-20 17:22:30.756418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.756625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.756650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.756868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.757085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.757110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.757346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.757645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.757670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.757915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.758127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.758153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.758397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.758623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.758647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.758933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.759142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.759167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.759404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.759640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.759666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 
00:30:14.694 [2024-07-20 17:22:30.759922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.760138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.760163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.760417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.760624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.760649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.760865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.761092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.761122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.761337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.761537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.761564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.761769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.761991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.762017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.762256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.762492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.762517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.762755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.763054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.763080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 
00:30:14.694 [2024-07-20 17:22:30.763333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.763548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.763573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.763821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.764053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.764078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.764290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.764500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.764525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.764735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.765013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.694 [2024-07-20 17:22:30.765040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.694 qpair failed and we were unable to recover it. 00:30:14.694 [2024-07-20 17:22:30.765257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.765464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.765489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.765704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.765961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.765995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.766239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.766446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.766472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 
00:30:14.695 [2024-07-20 17:22:30.766709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.766920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.766946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.767160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.767389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.767434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.767689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.767921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.767946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.768209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.768433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.768459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.768693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.768975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.769001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.769220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.769475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.769504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.769736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.769963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.769988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 
00:30:14.695 [2024-07-20 17:22:30.770232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.770470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.770495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.770734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.770991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.771037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.771313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.771539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.771564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.771810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.772047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.772072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.772279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.772483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.772509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.772767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.772995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.773021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.773228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.773448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.773475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 
00:30:14.695 [2024-07-20 17:22:30.773751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.774005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.774031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.774287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.774521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.774546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.774757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.774968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.774994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.775250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.775486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.775514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.775802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.776064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.776089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.776385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.776824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.776872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.777161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.777602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.777650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 
00:30:14.695 [2024-07-20 17:22:30.777909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.778176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.778201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.778461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.778805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.778830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.779071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.779502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.779558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.695 qpair failed and we were unable to recover it. 00:30:14.695 [2024-07-20 17:22:30.779845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.780109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.695 [2024-07-20 17:22:30.780136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 00:30:14.696 [2024-07-20 17:22:30.780429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.780654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.780684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 00:30:14.696 [2024-07-20 17:22:30.780954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.781224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.781251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 00:30:14.696 [2024-07-20 17:22:30.781507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.781767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.781827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 
00:30:14.696 [2024-07-20 17:22:30.782090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.782378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.782406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 00:30:14.696 [2024-07-20 17:22:30.782666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.782899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.782928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 00:30:14.696 [2024-07-20 17:22:30.783209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.783744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.783791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 00:30:14.696 [2024-07-20 17:22:30.784064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.784358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.784386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 00:30:14.696 [2024-07-20 17:22:30.784663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.784932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.784961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 00:30:14.696 [2024-07-20 17:22:30.785199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.785417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.785444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 00:30:14.696 [2024-07-20 17:22:30.785680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.785910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.696 [2024-07-20 17:22:30.785938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.696 qpair failed and we were unable to recover it. 
00:30:14.970 [2024-07-20 17:22:30.882545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.882805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.882835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-07-20 17:22:30.883130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.883417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.883441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-07-20 17:22:30.883905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.884190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.884218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-07-20 17:22:30.884497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.884724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.884748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-07-20 17:22:30.884975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.885403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.885453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-07-20 17:22:30.885763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.886034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.886065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-07-20 17:22:30.886327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.886839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.886868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 
00:30:14.970 [2024-07-20 17:22:30.887156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.887388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.887418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-07-20 17:22:30.887882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.888121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.888160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-07-20 17:22:30.888431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.888876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.970 [2024-07-20 17:22:30.888900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.889176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.889459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.889483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.889767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.890040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.890069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.890345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.890824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.890876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.891156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.891671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.891722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 
00:30:14.971 [2024-07-20 17:22:30.892013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.892225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.892250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.892531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.892825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.892854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.893111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.893483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.893532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.893877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.894163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.894190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.894487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.894748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.894777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.895020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.895303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.895331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.895615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.895909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.895938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 
00:30:14.971 [2024-07-20 17:22:30.896170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.896638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.896687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.896981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.897512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.897560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.897823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.898111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.898139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.898405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.898856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.898885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.899144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.899672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.899722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.900011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.900380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.900437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.900723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.900987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.901015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 
00:30:14.971 [2024-07-20 17:22:30.901324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.901881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.901910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.902194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.902647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.902697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-07-20 17:22:30.902970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.971 [2024-07-20 17:22:30.903410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.903457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.903716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.904015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.904044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.904378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.904632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.904671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.904974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.905202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.905230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.905495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.905753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.905781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 
00:30:14.972 [2024-07-20 17:22:30.906061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.906298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.906327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.906609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.906884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.906914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.907182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.907472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.907500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.907777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.907994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.908020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.908281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.908720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.908772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.909046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.909282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.909307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.909601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.909841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.909870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 
00:30:14.972 [2024-07-20 17:22:30.910132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.910616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.910664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.910961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.911230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.911258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.911553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.911833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.911862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.912152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.912611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.912658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.912933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.913221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.913250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.913493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.913781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.913818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.914108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.914581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.914632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 
00:30:14.972 [2024-07-20 17:22:30.914925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.915178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.915203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.915590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.915886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.972 [2024-07-20 17:22:30.915914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.972 qpair failed and we were unable to recover it. 00:30:14.972 [2024-07-20 17:22:30.916202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.916654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.916702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.917004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.917472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.917521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.917807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.918099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.918128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.918395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.918704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.918727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.918999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.919281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.919307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 
00:30:14.973 [2024-07-20 17:22:30.919564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.919833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.919861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.920094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.920356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.920384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.920679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.920946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.920976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.921238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.921659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.921709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.922036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.922584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.922633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.922918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.923155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.923182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.923474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.923928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.923956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 
00:30:14.973 [2024-07-20 17:22:30.924244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.924771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.924838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.925094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.925381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.925406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.925748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.926110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.926156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.973 qpair failed and we were unable to recover it. 00:30:14.973 [2024-07-20 17:22:30.926454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.973 [2024-07-20 17:22:30.926875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.926905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.927191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.927523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.927553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.927852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.928139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.928167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.928463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.928926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.928955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 
00:30:14.974 [2024-07-20 17:22:30.929216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.929697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.929747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.930040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.930489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.930538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.930841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.931146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.931174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.931400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.931693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.931717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.931991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.932413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.932461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.932757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.933036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.933067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.933323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.933863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.933892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 
00:30:14.974 [2024-07-20 17:22:30.934160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.934492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.934520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.934810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.935081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.935110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.935375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.935604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.935632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.936020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.936483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.936531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.936822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.937077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.937105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.937368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.937855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.937884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.938175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.938436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.938464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 
00:30:14.974 [2024-07-20 17:22:30.938748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.939031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.939060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.939324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.939840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.939869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.940116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.940376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.940403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.974 qpair failed and we were unable to recover it. 00:30:14.974 [2024-07-20 17:22:30.940695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.974 [2024-07-20 17:22:30.940979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.941007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-07-20 17:22:30.941258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.941832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.941878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-07-20 17:22:30.942148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.942435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.942463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-07-20 17:22:30.942751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.943019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.943048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 
00:30:14.975 [2024-07-20 17:22:30.943285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.943543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.943570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-07-20 17:22:30.943828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.944072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.944103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-07-20 17:22:30.944352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.944612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.944642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-07-20 17:22:30.944890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.945152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.945180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-07-20 17:22:30.945439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.945881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.945910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-07-20 17:22:30.946189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.946729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.946777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 00:30:14.975 [2024-07-20 17:22:30.947074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.947367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.975 [2024-07-20 17:22:30.947392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.975 qpair failed and we were unable to recover it. 
00:30:14.975 [2024-07-20 17:22:30.947685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.975 [2024-07-20 17:22:30.947952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.975 [2024-07-20 17:22:30.947981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.975 qpair failed and we were unable to recover it.
00:30:14.975 [2024-07-20 17:22:30.948244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.975 [2024-07-20 17:22:30.948719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.975 [2024-07-20 17:22:30.948769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.975 qpair failed and we were unable to recover it.
00:30:14.975 [... the same retry sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f554c000b90 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every attempt from 17:22:30.949 through 17:22:31.039; duplicate records elided ...]
00:30:14.983 [2024-07-20 17:22:31.039408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.983 [2024-07-20 17:22:31.039704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.983 [2024-07-20 17:22:31.039732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:14.983 qpair failed and we were unable to recover it.
00:30:14.983 [2024-07-20 17:22:31.040003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.040426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.040471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.040728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.041014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.041044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.041350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.041823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.041877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.042113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.042397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.042426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.042695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.042931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.042973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.043268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.043745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.043804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.044068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.044350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.044380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 
00:30:14.983 [2024-07-20 17:22:31.044638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.044877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.044907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.045158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.045501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.045556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.045841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.046080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.046109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.046646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.046976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.047002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.047275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.047856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.047885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.048170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.048467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.048496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.048763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.049000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.049032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 
00:30:14.983 [2024-07-20 17:22:31.049294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.049643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.049671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.049951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.050208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.050248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.050515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.050814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.050844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.051125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.051343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.051377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.983 qpair failed and we were unable to recover it. 00:30:14.983 [2024-07-20 17:22:31.051665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.983 [2024-07-20 17:22:31.051950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.051979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.052265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.052634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.052658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.053021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.053267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.053307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 
00:30:14.984 [2024-07-20 17:22:31.053685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.053975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.054006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.054254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.054729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.054780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.055081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.055381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.055410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.055737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.056079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.056109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.056344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.056747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.056812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.057071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.057331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.057356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.057635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.057974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.058009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 
00:30:14.984 [2024-07-20 17:22:31.058310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.058559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.058590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.058890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.059119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.059145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.059506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.059807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.059837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.060126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.060426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.060456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.060724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.060981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.061010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.061276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.061825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.061877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.062116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.062379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.062407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 
00:30:14.984 [2024-07-20 17:22:31.062672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.062926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.062955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.063241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.063690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.063740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.064003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.064456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.064512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.064941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.065205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.065234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.984 qpair failed and we were unable to recover it. 00:30:14.984 [2024-07-20 17:22:31.065496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.984 [2024-07-20 17:22:31.065800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.065829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.066191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.066755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.066830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.067125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.067636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.067668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 
00:30:14.985 [2024-07-20 17:22:31.067946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.068191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.068221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.068509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.068808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.068838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.069130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.069676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.069726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.069996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.070358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.070424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.070776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.071082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.071115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.071380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.071745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.071774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.072079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.072642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.072692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 
00:30:14.985 [2024-07-20 17:22:31.072958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.073224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.073255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.073731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.074031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.074061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.074352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.074855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.074885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.075188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.075673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.075724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.075990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.076250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.076275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.076534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.076903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.076933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 00:30:14.985 [2024-07-20 17:22:31.077195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.077451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.985 [2024-07-20 17:22:31.077480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.985 qpair failed and we were unable to recover it. 
00:30:14.985 [2024-07-20 17:22:31.077729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.078009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.078038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.078324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.078586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.078614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.078886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.079152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.079181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.079474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.079928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.079957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.080280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.080759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.080822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.081086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.081626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.081674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.081960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.082219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.082247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 
00:30:14.986 [2024-07-20 17:22:31.082480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.082743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.082771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.083058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.083534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.083582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.083850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.084115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.084145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.084606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.084915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.084945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.085189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.085452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.085480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.085761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.086016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.086047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.086340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.086569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.086597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 
00:30:14.986 [2024-07-20 17:22:31.086885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.087193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.087233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.087528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.087821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.087850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.088111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.088487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.088516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.088805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.089091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.089120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.089656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.089995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.090024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.090309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.090602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.090631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 00:30:14.986 [2024-07-20 17:22:31.090988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.091244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.986 [2024-07-20 17:22:31.091273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.986 qpair failed and we were unable to recover it. 
00:30:14.986 [2024-07-20 17:22:31.091535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.091818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.091848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.092145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.092694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.092746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.093054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.093325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.093357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.093610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.093885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.093914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.094176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.094405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.094435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.094691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.094961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.094991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.095283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.095848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.095878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 
00:30:14.987 [2024-07-20 17:22:31.096214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.096686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.096735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.096996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.097276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.097306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.097569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.097856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.097887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.098186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.098744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.098801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.099102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.099649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.099699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.099962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.100444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.100494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.100950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.101185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.101216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 
00:30:14.987 [2024-07-20 17:22:31.101507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.101849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.101879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.102199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.102735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.102785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.103087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.103635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.103688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.103954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.104217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.104247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.104520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.104810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.104840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.105064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.105355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.105384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 00:30:14.987 [2024-07-20 17:22:31.105665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.105972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.987 [2024-07-20 17:22:31.106002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.987 qpair failed and we were unable to recover it. 
00:30:14.988 [2024-07-20 17:22:31.106302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.106720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.106752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.107053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.107404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.107435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.107687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.107955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.107985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.108273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.108746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.108805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.109066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.109547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.109598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.109857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.110118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.110147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.110452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.110972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.111003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 
00:30:14.988 [2024-07-20 17:22:31.111391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.111859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.111889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.112193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.112723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.112773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.113121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.113399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.113432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.113817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.114123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.114155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.114417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.114677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.114706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:14.988 qpair failed and we were unable to recover it. 00:30:14.988 [2024-07-20 17:22:31.115177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.988 [2024-07-20 17:22:31.115724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.267 [2024-07-20 17:22:31.115775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.267 qpair failed and we were unable to recover it. 00:30:15.267 [2024-07-20 17:22:31.116071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.267 [2024-07-20 17:22:31.116320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.267 [2024-07-20 17:22:31.116350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.267 qpair failed and we were unable to recover it. 
00:30:15.267 [2024-07-20 17:22:31.116612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.116850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.116885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.117132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.117454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.117484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.117814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.118083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.118127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.118422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.118754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.118878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.119140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.119637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.119688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.119925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.120436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.120488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.120760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.121054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.121087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.121361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.121623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.121656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.121898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.122142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.122199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.122462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.122672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.122699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.123046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.123456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.123508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.123926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.124211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.124242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.124497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.124767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.124802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.125107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.125526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.125580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.125825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.126311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.126364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.126626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.126985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.127016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.127289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.127563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.127595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.127949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.128413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.128464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.128731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.128996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.129024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.129349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.129585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.129616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.129877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.130146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.130178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.267 qpair failed and we were unable to recover it.
00:30:15.267 [2024-07-20 17:22:31.130441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.130674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.267 [2024-07-20 17:22:31.130769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.131045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.131471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.131522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.131816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.132079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.132109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.132401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.132728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.132755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.133000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.133263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.133291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.133550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.133817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.133845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.134129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.134622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.134677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.134945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.135153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.135180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.135475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.135705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.135731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.136033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.136355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.136384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.136646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.136963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.136993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.137254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.137574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.137603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.137862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.138104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.138135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.138446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.138739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.138769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.139062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.139489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.139536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.139803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.140080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.140110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.140370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.140629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.140661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.140926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.141152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.141184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.141473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.141893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.141924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.142208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.142479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.142505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.142819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.143056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.143085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.143585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.143894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.143924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.144185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.144449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.144480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.144782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.145061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.145091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.145362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.145857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.145889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.146157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.146448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.146478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.146735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.147022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.147052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.147524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.147851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.147881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.148214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.148747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.148778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.149261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.149731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.149777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.150063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.150556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.150605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.150884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.151175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.151205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.151496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.151910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.151940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.152228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.152751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.152813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.153090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.153568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.153617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.153870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.154114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.154159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.154405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.154754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.154779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.155080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.155480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.155519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.155802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.156057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.156087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.156418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.156752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.156783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.157064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.157381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.157431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.157767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.158058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.268 [2024-07-20 17:22:31.158088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.268 qpair failed and we were unable to recover it.
00:30:15.268 [2024-07-20 17:22:31.158348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.158855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.158886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.159321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.159689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.159717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.160022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.160281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.160310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.160536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.160823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.160859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.161107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.161395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.161426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.161675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.161976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.162006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.162277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.162825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.162879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.163137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.163679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.163731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.163996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.164255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.164285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.164602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.164887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.164917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.165192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.165548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.165574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.165835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.166104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.166134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.166397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.166649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.166679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.167031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.167363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.167417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.167716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.168012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.168042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.168509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.168818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.168849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.169140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.169372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.169402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.169698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.169971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.170001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.170229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.170706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.170762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.171014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.171323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.171371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.171659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.171895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.171928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.172215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.172464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.172495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.172800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.173100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.173130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.173629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.174009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.174044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.174339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.174878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.174908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.175187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.175464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.175494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.175764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.176052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.176084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.176542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.176832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.176863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.177732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.178028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.178059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.178324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.178594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.178623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.178893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.179170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.179196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.179463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.179701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.179733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.180079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.180382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.180411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.180842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.181076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.181116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.181416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.181875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.181905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.182192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.182526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.182585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.182914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.183205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.183235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.183501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.183804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.183852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.184126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.184564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.184614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.184884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.185123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.185152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.185434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.185653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.185678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.269 qpair failed and we were unable to recover it.
00:30:15.269 [2024-07-20 17:22:31.185967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.269 [2024-07-20 17:22:31.186268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.186315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.186688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.186971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.186997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.187371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.187696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.187721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.188008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.188492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.188545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.188810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.189080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.189109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.189475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.189865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.189895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.190188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.190488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.190513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.190900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.191274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.191321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.191631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.191925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.191956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.192252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.192730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.192780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.193037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.193311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.193341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.193657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.193950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.193982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.194282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.194733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.194784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.195063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.195351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.195381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.195611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.195954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.195984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.196266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.196700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.196751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.197056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.197564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.197616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.197902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.198167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.198197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.198487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.198918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.198947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.199404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.199912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.199946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.200253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.200618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.200680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.200972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.201246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.270 [2024-07-20 17:22:31.201275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.270 qpair failed and we were unable to recover it.
00:30:15.270 [2024-07-20 17:22:31.201533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.201819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.201854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.202154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.202518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.202566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.202830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.203125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.203155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.203526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.203808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.203850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.204110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.204443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.204490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.204752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.205061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.205091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.205388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.205891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.205930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 
00:30:15.270 [2024-07-20 17:22:31.206198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.206736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.206785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.207098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.207632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.207683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.208007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.208342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.208372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.208662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.208953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.208983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.209521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.209809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.209849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.210160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.210441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.210510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.270 qpair failed and we were unable to recover it. 00:30:15.270 [2024-07-20 17:22:31.210886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.270 [2024-07-20 17:22:31.211155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.211185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 
00:30:15.271 [2024-07-20 17:22:31.211453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.211751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.211822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.212115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.212593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.212646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.212934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.213198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.213230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.213482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.213919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.213949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.214221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.214743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.214802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.215094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.215557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.215608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.215902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.216137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.216166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 
00:30:15.271 [2024-07-20 17:22:31.216527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.216940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.216970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.217236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.217703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.217755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.218064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.218547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.218598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.218896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.219158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.219187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.219529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.219858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.219888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.220177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.220468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.220497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.220906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.221196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.221226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 
00:30:15.271 [2024-07-20 17:22:31.221514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.221918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.221947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.222229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.222529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.222589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.222886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.223155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.223184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.223487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.223820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.223851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.224164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.224501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.224526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.224813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.225115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.225144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.225414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.225671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.225702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 
00:30:15.271 [2024-07-20 17:22:31.225962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.226322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.226347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.226755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.227033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.227067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.227422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.227894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.227924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.228235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.228771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.228841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.229107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.229354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.229380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.229689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.229998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.230027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.230341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.230871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.230901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 
00:30:15.271 [2024-07-20 17:22:31.231178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.231647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.231700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.231963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.232228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.232255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.232498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.232778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.232820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.233042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.233250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.233276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.233485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.233751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.233781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.234031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.234494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.234520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.234760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.235004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.235032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 
00:30:15.271 [2024-07-20 17:22:31.235271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.235490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.235533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.235808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.236023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.236051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.236504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.236791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.236824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.237046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.237286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.237329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.237628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.237917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.237946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.238260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.238508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.238534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.271 qpair failed and we were unable to recover it. 00:30:15.271 [2024-07-20 17:22:31.238751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.271 [2024-07-20 17:22:31.238998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.239025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-07-20 17:22:31.239265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.239531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.239557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.239885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.240172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.240199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.240457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.240739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.240766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.241033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.241514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.241566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.241839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.242076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.242102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.242362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.242755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.242821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.243056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.243345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.243372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-07-20 17:22:31.243607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.243891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.243921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.244184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.244464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.244490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.244722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.244963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.244991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.245224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.245489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.245515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.245753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.246008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.246036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.246315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.246665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.246713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.246971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.247208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.247250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-07-20 17:22:31.247537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.247776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.247811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.248027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.248530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.248581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.248890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.249102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.249129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.249351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.249584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.249611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.249819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.250057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.250084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.250345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.250706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.250732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.250953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.251196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.251228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-07-20 17:22:31.251497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.251832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.251875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.252145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.252366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.252393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.252615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.252854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.252882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.253102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.253338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.253364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.253579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.253835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.253864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.254081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.254318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.254345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.254585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.254851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.254878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-07-20 17:22:31.255122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.255403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.255431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.255869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.256108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.256139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.256396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.256609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.256638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.256885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.257108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.257137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.257378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.257606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.257636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.257927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.258215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.258242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.258510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.258747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.258776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-07-20 17:22:31.259052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.259297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.259328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.259615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.259884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.259913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.260155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.260370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.260399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.260642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.260901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.260930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.261154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.261413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.261443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.261712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.261928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.261956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 00:30:15.272 [2024-07-20 17:22:31.262187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.262405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.272 [2024-07-20 17:22:31.262432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.272 qpair failed and we were unable to recover it. 
00:30:15.272 [2024-07-20 17:22:31.262687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.262951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.262982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.263241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.263476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.263503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.263783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.264047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.264074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.264334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.264583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.264616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.264909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.265157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.265185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.265425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.265731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.265758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.265974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.266193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.266220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 
00:30:15.273 [2024-07-20 17:22:31.266461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.266765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.266826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.267099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.267307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.267335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.267598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.267872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.267900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.268171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.268420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.268446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.268690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.268972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.269000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.269313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.269529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.269557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 00:30:15.273 [2024-07-20 17:22:31.269832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.270067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.273 [2024-07-20 17:22:31.270099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.273 qpair failed and we were unable to recover it. 
00:30:15.273 [2024-07-20 17:22:31.270348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.273 [2024-07-20 17:22:31.270761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.273 [2024-07-20 17:22:31.270821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.273 qpair failed and we were unable to recover it.
[... the same four-line failure sequence repeats for every subsequent reconnect attempt from 17:22:31.271 through 17:22:31.375 (console timestamps 00:30:15.273-00:30:15.277): two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." ...]
00:30:15.277 [2024-07-20 17:22:31.376134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.376418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.376447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.376887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.377143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.377172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.377464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.377749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.377779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.378050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.378550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.378600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.378869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.379130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.379161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.379464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.379778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.379817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.380084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.380486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.380516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 
00:30:15.277 [2024-07-20 17:22:31.380813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.381166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.381197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.381485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.381719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.381749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.382052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.382464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.382490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.382784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.383080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.383109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.383376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.383700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.383730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.383964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.384263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.384322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.384841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.385131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.385161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 
00:30:15.277 [2024-07-20 17:22:31.385460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.385882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.385911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.386129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.386573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.386628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.386921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.387187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.387223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.387551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.387831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.387858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.388238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.388831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.388885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.389194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.389716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.389767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.390071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.390431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.390472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 
00:30:15.277 [2024-07-20 17:22:31.390842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.391110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.391140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.391407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.391849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.391880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.392246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.392762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.392834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.393100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.393628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.393680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.393950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.394215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.394245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.394520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.394784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.394832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.395127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.395405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.395431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 
00:30:15.277 [2024-07-20 17:22:31.395679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.395972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.396003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.396453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.396892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.396922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.397184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.397659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.397710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.398035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.398551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.398602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.398895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.399159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.399189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.399674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.399950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.399980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.277 qpair failed and we were unable to recover it. 00:30:15.277 [2024-07-20 17:22:31.400242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.277 [2024-07-20 17:22:31.400707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.400756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 
00:30:15.278 [2024-07-20 17:22:31.401032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.401654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.401705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.401969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.402230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.402260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.402823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.403106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.403138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.403416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.403877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.403908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.404165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.404591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.404641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.404922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.405181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.405221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.405659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.405972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.405999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 
00:30:15.278 [2024-07-20 17:22:31.406258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.406585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.406616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.406875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.407393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.407442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.407715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.407982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.408010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.408442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.408931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.408962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.409223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.409559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.409589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.409852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.410323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.410369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.410682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.410991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.411018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 
00:30:15.278 [2024-07-20 17:22:31.411482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.411940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.411970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.412258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.412771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.412995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.413264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.413826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.413859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.414147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.414643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.414759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.415242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.415552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.415582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.415825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.416085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.416115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 00:30:15.278 [2024-07-20 17:22:31.416369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.416627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.278 [2024-07-20 17:22:31.416840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.278 qpair failed and we were unable to recover it. 
00:30:15.278 [2024-07-20 17:22:31.417106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.417350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.417382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.417840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.418141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.418170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.418371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.418703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.418813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.419082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.419527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.419578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.419872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.420248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.420304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.420769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.421066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.421095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.421354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.421615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.421656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 
00:30:15.561 [2024-07-20 17:22:31.421989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.422223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.422254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.422512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.422768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.422805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.423070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.423541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.423594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.423885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.424149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.424180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.424520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.424825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.424856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.425125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.425398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.425425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.425747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.426047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.426077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 
00:30:15.561 [2024-07-20 17:22:31.426343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.426600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.426629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.426932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.427209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.427238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.427538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.427825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.427856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.428121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.428669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.428720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.429011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.429361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.429387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.429720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.430017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.430047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.430320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.430566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.430607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 
00:30:15.561 [2024-07-20 17:22:31.430901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.431193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.431222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.431494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.431755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.431786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.432062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.432357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.432386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.432674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.432936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.432967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.433209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.433489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.433518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.433815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.434082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.434114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.434445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.434909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.434939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 
00:30:15.561 [2024-07-20 17:22:31.435229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.435490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.435519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.435944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.436235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.436265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.436550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.436840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.436871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.437139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.437495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.437550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.437807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.438175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.438206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.438764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.439127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.439181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.439482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.439834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.439874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 
00:30:15.561 [2024-07-20 17:22:31.440161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.440449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.440478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.440725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.440986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.441016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.441495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.441806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.441836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.442088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.442541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.442592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.442860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.443130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.443159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.443423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.443667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.443696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 00:30:15.561 [2024-07-20 17:22:31.444001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.444238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.561 [2024-07-20 17:22:31.444269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.561 qpair failed and we were unable to recover it. 
00:30:15.561 [2024-07-20 17:22:31.444546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.561 [2024-07-20 17:22:31.444846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.561 [2024-07-20 17:22:31.444876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.561 qpair failed and we were unable to recover it.
[... the same three-message sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f554c000b90 at 10.0.0.2 port 4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 17:22:31.445 through 17:22:31.538 ...]
00:30:15.565 [2024-07-20 17:22:31.538446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.565 [2024-07-20 17:22:31.538914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.565 [2024-07-20 17:22:31.538945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.565 qpair failed and we were unable to recover it.
00:30:15.565 [2024-07-20 17:22:31.539230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.539557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.539586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.539847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.540095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.540124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.540408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.540882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.540912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.541185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.541694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.541746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.541981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.542272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.542301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.542780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.543057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.543086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.543378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.543863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.543893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 
00:30:15.565 [2024-07-20 17:22:31.544166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.544677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.544728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.545022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.545402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.545452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.545886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.546162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.546190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.546448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.546705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.546734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.547012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.547293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.547317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.547657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.547947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.547977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.548418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.548927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.548958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 
00:30:15.565 [2024-07-20 17:22:31.549222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.549493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.549521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.549798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.550070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.550099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.550390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.550767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.550828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.551090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.551402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.551447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.551720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.551956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.552002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.552407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.552761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.552816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.553085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.553297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.553322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 
00:30:15.565 [2024-07-20 17:22:31.553749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.554065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.554092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.554335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.554595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.554624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.554921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.555157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.555200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.555514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.555742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.555771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.556041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.556383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.556430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.556696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.556937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.556968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.565 [2024-07-20 17:22:31.557223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.557501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.557548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 
00:30:15.565 [2024-07-20 17:22:31.557840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.558121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.565 [2024-07-20 17:22:31.558150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.565 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.558462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.558801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.558827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.559075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.559354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.559400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.559672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.559914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.559943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.560199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.560487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.560517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.560778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.561027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.561057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.561354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.561782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.561874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-07-20 17:22:31.562143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.562699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.562732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.563030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.563273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.563305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.563607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.563890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.563922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.564164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.564448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.564476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.564788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.565073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.565102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.565391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.565811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.565865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.566105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.566360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.566387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-07-20 17:22:31.566633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.566896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.566927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.567205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.567536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.567596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.567870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.568131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.568160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.568695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.569009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.569039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.569329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.569619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.569653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.569923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.570143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.570169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.570436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.570911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.570941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-07-20 17:22:31.571183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.571473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.571501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.571800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.572073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.572103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.572383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.572672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.572717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.572993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.573256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.573286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.573536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.573767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.573816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.574133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.574551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.574605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.574951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.575177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.575208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-07-20 17:22:31.575498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.575766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.575823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.576088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.576379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.576409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.576677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.576972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.577002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.577429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.577917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.577946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.578181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.578443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.578472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.578939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.579206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.579237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.579533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.579840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.579881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-07-20 17:22:31.580182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.580687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.580737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.581008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.581232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.581263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.581815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.582120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.582150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.582435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.582695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.582730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.582978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.583197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.583223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.583507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.583768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.583806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.584045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.584336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.584364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-07-20 17:22:31.584663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.584918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.584947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.585169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.585411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.585453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.585743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.586008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.586037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.586377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.586629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.586658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.586917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.587154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.587195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.587568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.587859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.587889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.588148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.588672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.588726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-07-20 17:22:31.588996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.589266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.589295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.589559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.589855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.589885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.590146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.590451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.590498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.590860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.591117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.591147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.591448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.591859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.591889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.592150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.592386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.592428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.592711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.592975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.593004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-07-20 17:22:31.593312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.593548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.593579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.593848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.594084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.594114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.594376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.594741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.594767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.595043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.595314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.595342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.595582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.595828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.595858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.596068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.596323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.596353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.596574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.596849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.596881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 
00:30:15.566 [2024-07-20 17:22:31.597122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.597387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.597419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.597656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.597905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.566 [2024-07-20 17:22:31.597933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.566 qpair failed and we were unable to recover it. 00:30:15.566 [2024-07-20 17:22:31.598168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.598409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.598439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.598699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.598963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.598990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.599240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.599539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.599571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.599881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.600096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.600123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.600375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.600588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.600615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 
00:30:15.567 [2024-07-20 17:22:31.600837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.601098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.601129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.601466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.601725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.601756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.602003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.602273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.602303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.602567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.602809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.602839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.603094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.603298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.603324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.603576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.603816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.603854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 00:30:15.567 [2024-07-20 17:22:31.604077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.604361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.567 [2024-07-20 17:22:31.604393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.567 qpair failed and we were unable to recover it. 
00:30:15.569 [2024-07-20 17:22:31.690786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.691202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.691234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.691675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.692013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.692043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.692309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.692867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.692896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.693219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.693485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.693516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.693783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.694059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.694087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.694630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.694951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.694981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.695252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.695577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.695604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 
00:30:15.569 [2024-07-20 17:22:31.695897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.696176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.696204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.696435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.696693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.696721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.697014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.697426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.697514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.697813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.698264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.698309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.698548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.698826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.698858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.699118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.699420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.699470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.699781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.700105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.700131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 
00:30:15.569 [2024-07-20 17:22:31.700466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.700961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.700990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.701247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.701674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.701706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.569 [2024-07-20 17:22:31.702025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.702519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.569 [2024-07-20 17:22:31.702571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.569 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.702801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.703049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.703079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.703406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.703667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.703696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.703959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.704250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.704277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.704571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.704850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.704879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 
00:30:15.832 [2024-07-20 17:22:31.705390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.705877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.705907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.706208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.706552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.706584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.706887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.707142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.707171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.707404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.707848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.707879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.708201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.708616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.708675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.708943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.709176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.709206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.709471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.709735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.709765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 
00:30:15.832 [2024-07-20 17:22:31.710037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.710312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.710340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.710595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.710876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.710905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.711173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.711704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.711756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.712028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.712293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.712321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.712581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.712872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.712903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.713257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.713590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.713618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.832 [2024-07-20 17:22:31.713878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.714106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.714135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 
00:30:15.832 [2024-07-20 17:22:31.714449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.714893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.832 [2024-07-20 17:22:31.714922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.832 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.715186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.715424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.715466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.715899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.716429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.716477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.716763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.717045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.717073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.717328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.717822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.717885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.718166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.718467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.718513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.718817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.719054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.719084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 
00:30:15.833 [2024-07-20 17:22:31.719384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.719649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.719676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.719932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.720192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.720221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.720510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.720805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.720834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.721100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.721517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.721565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.721852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.722126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.722151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.722425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.722917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.722946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 00:30:15.833 [2024-07-20 17:22:31.723242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 668832 Killed "${NVMF_APP[@]}" "$@" 00:30:15.833 [2024-07-20 17:22:31.723638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.833 [2024-07-20 17:22:31.723683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.833 qpair failed and we were unable to recover it. 
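On Linux, errno 111 is ECONNREFUSED: once the target application is killed (the `Killed "${NVMF_APP[@]}"` record above), nothing is listening on 10.0.0.2:4420, so every TCP connect() from the host side is refused and the NVMe/TCP qpair can never be established. A minimal standalone sketch of that failure mode follows; it is not SPDK's posix_sock_create(), only an illustration, with the address and port taken from the log.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Illustrative sketch only -- not SPDK code. 10.0.0.2:4420
     * mirrors the addr/port reported by nvme_tcp_qpair_connect_sock. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}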
[... connect() failed (errno = 111) / sock connection error / qpair-failed retries continue in the background throughout the following trace and are omitted ...]
00:30:15.833 17:22:31 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:30:15.833 17:22:31 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:15.833 17:22:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:30:15.833 17:22:31 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:15.833 17:22:31 -- common/autotest_common.sh@10 -- # set +x
00:30:15.833 17:22:31 -- nvmf/common.sh@469 -- # nvmfpid=669533
00:30:15.833 17:22:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:15.833 17:22:31 -- nvmf/common.sh@470 -- # waitforlisten 669533
00:30:15.833 17:22:31 -- common/autotest_common.sh@819 -- # '[' -z 669533 ']'
00:30:15.833 17:22:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:15.833 17:22:31 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:15.833 17:22:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:15.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:15.833 17:22:31 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:15.833 17:22:31 -- common/autotest_common.sh@10 -- # set +x
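The waitforlisten step above blocks until the restarted nvmf_tgt opens its JSON-RPC UNIX domain socket, with the path (/var/tmp/spdk.sock) and retry bound (max_retries=100) visible in the trace. A hedged sketch of that kind of wait loop, assuming those same values; this is an illustration of the pattern, not the autotest helper itself.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Poll a UNIX domain socket until something is accepting connections
 * on it, the way waitforlisten waits for /var/tmp/spdk.sock. The path
 * and retry bound come from the trace; the loop itself is illustrative. */
static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { 0 };

    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;        /* the RPC server is up and listening */
        }
        if (fd >= 0) {
            close(fd);
        }
        usleep(100 * 1000);  /* 100 ms between attempts */
    }
    return -1;               /* gave up: process never started listening */
}

int main(void)
{
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}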
00:30:15.833 [2024-07-20 17:22:31.730848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.833 [2024-07-20 17:22:31.731132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:15.833 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." retries continue from 17:22:31.730 through 17:22:31.766 while the new nvmf_tgt comes up; repetitions omitted ...]
00:30:15.835 [2024-07-20 17:22:31.766742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.767022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.767049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 00:30:15.835 [2024-07-20 17:22:31.767267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.767503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.767529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 00:30:15.835 [2024-07-20 17:22:31.767745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.767963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.767990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 00:30:15.835 [2024-07-20 17:22:31.768204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.768443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.768469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 00:30:15.835 [2024-07-20 17:22:31.768684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.768941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.768967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 00:30:15.835 [2024-07-20 17:22:31.769174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.769377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.769402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 00:30:15.835 [2024-07-20 17:22:31.769618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.769861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.769888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 
00:30:15.835 [2024-07-20 17:22:31.770119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.770324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.770349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 00:30:15.835 [2024-07-20 17:22:31.770601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.770869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.770896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 00:30:15.835 [2024-07-20 17:22:31.771111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.771323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.771350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.835 qpair failed and we were unable to recover it. 00:30:15.835 [2024-07-20 17:22:31.771616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.771854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.835 [2024-07-20 17:22:31.771881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.836 qpair failed and we were unable to recover it. 00:30:15.836 [2024-07-20 17:22:31.772132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.836 [2024-07-20 17:22:31.772342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.836 [2024-07-20 17:22:31.772367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.836 qpair failed and we were unable to recover it. 00:30:15.836 [2024-07-20 17:22:31.772638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.836 [2024-07-20 17:22:31.772914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.836 [2024-07-20 17:22:31.772941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.836 qpair failed and we were unable to recover it. 00:30:15.836 [2024-07-20 17:22:31.773183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.836 [2024-07-20 17:22:31.773393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.836 [2024-07-20 17:22:31.773419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.836 qpair failed and we were unable to recover it. 00:30:15.836 [2024-07-20 17:22:31.773658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.836 [2024-07-20 17:22:31.773790] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:15.836 [2024-07-20 17:22:31.773864] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... entries from 17:22:31.773869 through 17:22:31.816722 resume the same connect() failure (errno = 111) and qpair recovery error for tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 ...]
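On Linux, errno 111 is ECONNREFUSED: the target answered the TCP SYN with a RST because nothing was listening on 10.0.0.2:4420 (the NVMe/TCP default port) at the time of each attempt. The following is a minimal standalone sketch, not SPDK code, that reproduces the exact errno the posix_sock_create() entries report; the address and port are copied from the log, and it assumes no listener is up on that port.

    /* Sketch: connect() to a TCP port with no listener -> errno 111. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* addr from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener, the peer's kernel sends a RST and connect()
             * fails with ECONNREFUSED (errno 111), matching the log. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }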
00:30:15.838 EAL: No free 2048 kB hugepages reported on node 1
[... entries from 17:22:31.817238 through 17:22:31.832263 continue the same connect() failure (errno = 111) and qpair recovery error for tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 ...]
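The EAL line above means DPDK found no free 2048 kB hugepages on NUMA node 1 while the nvmf target was initializing. A quick way to see what the kernel exposes is to read the per-node sysfs counter; the sketch below does so in C, assuming the standard Linux sysfs layout (node number and page size copied from the warning).

    /* Sketch: read the free 2048 kB hugepage count for NUMA node 1. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror("fopen");  /* e.g. no node1 on a single-socket machine */
            return 1;
        }

        long free_pages = 0;
        if (fscanf(f, "%ld", &free_pages) == 1)
            printf("node1 free 2048 kB hugepages: %ld\n", free_pages);

        fclose(f);
        return 0;
    }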
00:30:15.839 [2024-07-20 17:22:31.831996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.832236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.832263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.832530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.832752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.832803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.833050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.833289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.833332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.833579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.833843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.833871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.834159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.834396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.834420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.834635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.834898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.834924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.835187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.835434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.835474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 
00:30:15.839 [2024-07-20 17:22:31.835766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.836010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.836037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.836283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.836553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.836579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.836831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.837034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.837059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.837362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.837616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.837657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.837953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.838181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.838223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.838517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.838798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.838824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.839091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.839352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.839376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 
00:30:15.839 [2024-07-20 17:22:31.839631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.839866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.839892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.840127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.840363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.840391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.839 qpair failed and we were unable to recover it. 00:30:15.839 [2024-07-20 17:22:31.840638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.839 [2024-07-20 17:22:31.840882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.840916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.841177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.841435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.841461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.841703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.841912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.841939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.842177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.842410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.842436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.842701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.842940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.842966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-07-20 17:22:31.843220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.843485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.843511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.843780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.844032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.844059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.844315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.844540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.844567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.844826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.845074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.845115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.845394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.845703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.845743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.846032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.846311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.846336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.846664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.846944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.846972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-07-20 17:22:31.847358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.847579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.847605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.847867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.848105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.848132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.848494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.848774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.848804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.849000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.849228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.849254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.849495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.849710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.849736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.849975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.850226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.850251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.850494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.850711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.850737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-07-20 17:22:31.851007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.851231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.851257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.851524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.851854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.851880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.852127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.852453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.852480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.852598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:15.840 [2024-07-20 17:22:31.852748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.852996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.853038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.853432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.853698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.853724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.853955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.854212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.854237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.854486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.854748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.854789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-07-20 17:22:31.855053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.855291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.855317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.855616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.855877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.855904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.856147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.856406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.856432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.856670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.856916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.856942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.857219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.857604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.857628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.857898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.858123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.858148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.858399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.858629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.858655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 
00:30:15.840 [2024-07-20 17:22:31.858932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.859168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.859195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.859414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.859670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.859696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.840 qpair failed and we were unable to recover it. 00:30:15.840 [2024-07-20 17:22:31.859944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.860190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.840 [2024-07-20 17:22:31.860215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.860491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.860739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.860778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.861059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.861294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.861321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.861565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.861776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.861825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.862095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.862342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.862367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-07-20 17:22:31.862621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.862822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.862850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.863099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.863337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.863363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.863628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.863889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.863917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.864244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.864481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.864507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.864846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.865087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.865129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.865389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.865619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.865646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.865940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.866243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.866268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-07-20 17:22:31.866548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.866840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.866884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.867137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.867366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.867392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.867654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.867920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.867947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.868242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.868530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.868556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.868853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.869118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.869144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.869458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.869737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.869764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.870015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.870234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.870261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-07-20 17:22:31.870529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.870788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.870843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.871102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.871344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.871370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.871652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.871939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.871966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.872240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.872464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.872490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.872726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.872971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.872999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.873264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.873467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.873495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.873746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.874006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.874048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-07-20 17:22:31.874295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.874515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.874543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.874782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.875038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.875080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.875333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.875573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.875600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.875877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.876103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.876130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.876390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.876695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.876722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.877003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.877229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.877257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.877530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.877782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.877814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 
00:30:15.841 [2024-07-20 17:22:31.878087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.878386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.878412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.878682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.879022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.879049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.879322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.879563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.879605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.841 qpair failed and we were unable to recover it. 00:30:15.841 [2024-07-20 17:22:31.879887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.880156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.841 [2024-07-20 17:22:31.880182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.880581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.880870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.880897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.881203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.881464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.881491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.881694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.881913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.881941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 
00:30:15.842 [2024-07-20 17:22:31.882186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.882425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.882456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.882698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.882938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.882966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.883234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.883475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.883519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.883784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.884056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.884084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.884371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.884659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.884685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.884933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.885172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.885198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.885444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.885638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.885665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 
00:30:15.842 [2024-07-20 17:22:31.886020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.886254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.886280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.886636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.886857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.886885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.887141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.887387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.887415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.887695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.887945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.887979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.888240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.888476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.888502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.888747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.889012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.889039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 00:30:15.842 [2024-07-20 17:22:31.889352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.889650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.842 [2024-07-20 17:22:31.889675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.842 qpair failed and we were unable to recover it. 
00:30:15.842 [2024-07-20 17:22:31.889950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.842 [2024-07-20 17:22:31.890163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.842 [2024-07-20 17:22:31.890190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:15.842 qpair failed and we were unable to recover it.
[... the same four-line failure cycle (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x7f555c000b90 at 10.0.0.2:4420, then "qpair failed and we were unable to recover it.") repeats back-to-back from 17:22:31.890479 through 17:22:31.944637; duplicates elided ...]
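errno = 111 is ECONNREFUSED on Linux: each connect() above was actively refused because nothing was accepting TCP connections at 10.0.0.2:4420 at that moment, so the NVMe/TCP qpair could never be established. A minimal sketch of how one might confirm that from the target host; both commands are assumptions for illustration (standard iproute2 and nvme-cli tools, not commands this job ran):

+ ss -ltn | grep 4420 || echo 'no NVMe/TCP listener bound on port 4420'
+ nvme discover -t tcp -a 10.0.0.2 -s 4420

If no listener is bound, the refusals above are expected until the target finishes starting and binds the port.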
[... the failure cycle continues from 17:22:31.944903 through 17:22:31.947557; duplicates elided ...]
00:30:15.845 [2024-07-20 17:22:31.947569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:15.845 [2024-07-20 17:22:31.947584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:15.845 qpair failed and we were unable to recover it.
00:30:15.845 [2024-07-20 17:22:31.947690] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:15.845 [2024-07-20 17:22:31.947710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:15.845 [2024-07-20 17:22:31.947724] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:15.845 [2024-07-20 17:22:31.947780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:30:15.845 [2024-07-20 17:22:31.947814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:30:15.845 [2024-07-20 17:22:31.947863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:30:15.845 [2024-07-20 17:22:31.947866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
[... the failure cycle resumes from 17:22:31.947791 through 17:22:31.950580; duplicates elided ...]
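The app_setup_trace notices indicate the target came up with the full tracepoint group mask (0xFFFF), and the reactor_run notices show its event loops starting on cores 4-7. Following the instructions printed by the notices themselves, a trace snapshot could be captured like this (the copy destination is a hypothetical path):

+ spdk_trace -s nvmf -i 0
+ cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0

The trace_flags.c error alongside those notices appears unrelated to the connection failures: the tracepoint description name RDMA_REQ_RDY_TO_COMPL_PEND simply exceeds trace_register_description's length limit.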
[... the failure cycle continues from 17:22:31.950825 through 17:22:31.971543; duplicates elided ...]
00:30:15.846 [2024-07-20 17:22:31.971788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.846 [2024-07-20 17:22:31.972052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:15.846 [2024-07-20 17:22:31.972078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:15.846 qpair failed and we were unable to recover it.
00:30:15.846 [2024-07-20 17:22:31.972310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-07-20 17:22:31.972555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-07-20 17:22:31.972581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.846 qpair failed and we were unable to recover it. 00:30:15.846 [2024-07-20 17:22:31.972782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-07-20 17:22:31.973023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.846 [2024-07-20 17:22:31.973049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.973251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.973493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.973520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.973759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.973977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.974011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.974256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.974495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.974521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.974756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.975008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.975035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.975273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.975645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.975684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 
00:30:15.847 [2024-07-20 17:22:31.975914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.976122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.976148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.976389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.976751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.976777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.977035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.977280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.977306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.977516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.977722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.977750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.978005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.978254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.978281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.978526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.978739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.978766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.978984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.979224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.979255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 
00:30:15.847 [2024-07-20 17:22:31.979492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.979732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.979759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.980010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.980221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.980249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.980465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.980687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.980712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.980952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.981193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.981219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.981465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.981667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.981694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.981945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.982160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.982188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.982401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.982617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.982643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 
00:30:15.847 [2024-07-20 17:22:31.982867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.983119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.983146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.983403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.983613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.983638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.983899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.984182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.984213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.984449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.984737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.984763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:15.847 [2024-07-20 17:22:31.985007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.985239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:15.847 [2024-07-20 17:22:31.985265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:15.847 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.985466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.985699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.985724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.985942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.986156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.986184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-07-20 17:22:31.986564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.986800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.986835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.987057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.987266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.987293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.987546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.987754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.987780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.988045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.988252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.988279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.988515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.988753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.988781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.989030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.989456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.989486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.989917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.990160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.990187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-07-20 17:22:31.990399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.990667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.990693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.990915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.991129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.991156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.991395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.991638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.991665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.991893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.992107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.992133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.992375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.992636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.992662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.993029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.993301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.993327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.993565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.993832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.993859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-07-20 17:22:31.994066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.994272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.994300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.994682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.994946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.994974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.995208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.995444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.995470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.995709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.995946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.995972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.996206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.996451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.996477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.996693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.996924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.996951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.997375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.997614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.997639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-07-20 17:22:31.997873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.998083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.998111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.998348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.998572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.998598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.998842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.999086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.999112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.999349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.999548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:31.999573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:31.999802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.000019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.000045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:32.000289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.000518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.000543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:32.000754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.000992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.001019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-07-20 17:22:32.001289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.001530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.001556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:32.001825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.002069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.002096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:32.002333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.002541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.002567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:32.002813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.003083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.003109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:32.003344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.003558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.003584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:32.003837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.004060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.004098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:32.004362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.004601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.004627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 
00:30:16.130 [2024-07-20 17:22:32.004876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.005114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.005141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.130 qpair failed and we were unable to recover it. 00:30:16.130 [2024-07-20 17:22:32.005377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.005580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.130 [2024-07-20 17:22:32.005606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.005822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.006053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.006079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.006286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.006514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.006540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.006752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.007013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.007052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.007317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.007516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.007543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.007785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.008046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.008072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 
00:30:16.131 [2024-07-20 17:22:32.008324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.008537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.008563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.008770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.009282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.009325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.009600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.009815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.009847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.010072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.010280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.010307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.010559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.010774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.010807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.011046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.011253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.011280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.011490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.011756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.011782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 
00:30:16.131 [2024-07-20 17:22:32.012045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.012266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.012292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.012557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.012767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.012816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.013043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.013288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.013314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.013558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.013768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.013802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.014018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.014231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.014258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.014499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.014713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.014740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.014996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.015268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.015295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 
00:30:16.131 [2024-07-20 17:22:32.015508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.015936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.015965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.016414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.016661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.016687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.016900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.017138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.017164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.017435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.017669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.017696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.017964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.018169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.018196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.018458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.018690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.018716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.018931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.019373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.019399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 
00:30:16.131 [2024-07-20 17:22:32.019677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.019886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.019913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.020160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.020364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.020391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.020602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.020838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.020865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.021124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.021387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.021413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.021660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.021929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.021956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.022204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.022434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.022461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.022729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.022940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.022967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 
00:30:16.131 [2024-07-20 17:22:32.023201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.023431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.023456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.023668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.023878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.023905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.024181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.024382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.024410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.024654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.024873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.024900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.025111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.025347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.025373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.025589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.025835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.025863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 00:30:16.131 [2024-07-20 17:22:32.026105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.026332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.131 [2024-07-20 17:22:32.026359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.131 qpair failed and we were unable to recover it. 
00:30:16.134 [2024-07-20 17:22:32.095348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.134 [2024-07-20 17:22:32.095577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.134 [2024-07-20 17:22:32.095603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.134 qpair failed and we were unable to recover it. 00:30:16.134 [2024-07-20 17:22:32.095831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.134 [2024-07-20 17:22:32.096060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.134 [2024-07-20 17:22:32.096086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.134 qpair failed and we were unable to recover it. 00:30:16.134 [2024-07-20 17:22:32.096322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.134 [2024-07-20 17:22:32.096581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.134 [2024-07-20 17:22:32.096607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.134 qpair failed and we were unable to recover it. 00:30:16.134 [2024-07-20 17:22:32.096852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.134 [2024-07-20 17:22:32.097082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.134 [2024-07-20 17:22:32.097108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.134 qpair failed and we were unable to recover it. 00:30:16.134 [2024-07-20 17:22:32.097342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.134 [2024-07-20 17:22:32.097549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.097574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.097773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.098048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.098074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.098323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.098559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.098585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 
00:30:16.135 [2024-07-20 17:22:32.098825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.099038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.099064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.099304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.099498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.099524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.099727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.099960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.099987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.100240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.100443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.100470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.100669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.100923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.100950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.101199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.101408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.101435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.101676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.101914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.101939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 
00:30:16.135 [2024-07-20 17:22:32.102182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.102388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.102425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.102622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.102827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.102856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.103099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.103315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.103341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.103571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.103845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.103871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.104141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.104351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.104376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.104590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.104830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.104857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.105081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.105277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.105303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 
00:30:16.135 [2024-07-20 17:22:32.105541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.105746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.105774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.106031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.106280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.106307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.106507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.106743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.106770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.107068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.107315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.107343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.107579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.107786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.107819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.108099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.108345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.108372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.108611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.108843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.108870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 
00:30:16.135 [2024-07-20 17:22:32.109140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.109370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.109396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.109613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.109853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.109880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.110028] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf50100 is same with the state(5) to be set 00:30:16.135 [2024-07-20 17:22:32.110341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.110598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.110627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.110850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.111091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.111118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.111367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.111600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.111625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.111842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.112083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.112109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.112350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.112558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.112586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 
00:30:16.135 [2024-07-20 17:22:32.112837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.113074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.113100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.113338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.113540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.113566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.113804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.114041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.114069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.114335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.114592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.114618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.114835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.115068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.115094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.115328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.115531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.115557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.115761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.116011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.116038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 
00:30:16.135 [2024-07-20 17:22:32.116280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.116514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.116539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.116777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.117004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.117029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.117278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.117537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.117563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.117759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.118004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.118030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.118264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.118501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.135 [2024-07-20 17:22:32.118526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.135 qpair failed and we were unable to recover it. 00:30:16.135 [2024-07-20 17:22:32.118740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.118972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.118999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.119232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.119493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.119519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 
00:30:16.136 [2024-07-20 17:22:32.119766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.120011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.120037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.120282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.120493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.120519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.120753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.120992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.121019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.121257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.121456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.121482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.121714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.121950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.121976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.122182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.122440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.122465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.122700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.122947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.122974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 
00:30:16.136 [2024-07-20 17:22:32.123212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.123446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.123472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.123708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.123934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.123960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.124159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.124398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.124424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.124661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.124899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.124926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.125129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.125365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.125391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.125598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.125815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.125841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.126058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.126262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.126290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 
00:30:16.136 [2024-07-20 17:22:32.126525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.126761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.126787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.127028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.127239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.127264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.127494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.127729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.127755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.127987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.128190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.128218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.128491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.128698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.128724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.128970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.129186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.129212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.129447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.129651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.129678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 
00:30:16.136 [2024-07-20 17:22:32.129918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.130124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.130151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.130356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.130599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.130624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.130855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.131058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.131084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.131285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.131521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.131548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.131779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.132022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.132048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.132252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.132486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.132511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.132723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.132934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.132961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 
00:30:16.136 [2024-07-20 17:22:32.133196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.133422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.133448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.133685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.133922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.133949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.134216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.134421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.134449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.134697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.134956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.134983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.135190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.135429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.135457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.135694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.135931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.135957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.136191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.136436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.136461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 
00:30:16.136 [2024-07-20 17:22:32.136695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.136934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.136961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.137176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.137419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.137445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.137658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.137867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.137893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.138128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.138343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.138369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.138571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.138807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.138834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.139041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.139252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.139279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.139523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.139757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.139783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 
00:30:16.136 [2024-07-20 17:22:32.140005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.140242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.140269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.140509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.140727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.140753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.141009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.141238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.141264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.141475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.141680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.136 [2024-07-20 17:22:32.141708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.136 qpair failed and we were unable to recover it. 00:30:16.136 [2024-07-20 17:22:32.141973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.142204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.142229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 00:30:16.137 [2024-07-20 17:22:32.142423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.142684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.142709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 00:30:16.137 [2024-07-20 17:22:32.142948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.143186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.143212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 
00:30:16.137 [2024-07-20 17:22:32.143448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.143655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.143683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 00:30:16.137 [2024-07-20 17:22:32.143920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.144149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.144175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 00:30:16.137 [2024-07-20 17:22:32.144412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.144624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.144650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 00:30:16.137 [2024-07-20 17:22:32.144890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.145127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.145153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 00:30:16.137 [2024-07-20 17:22:32.145356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.145601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.145628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 00:30:16.137 [2024-07-20 17:22:32.145848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.146051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.146077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 00:30:16.137 [2024-07-20 17:22:32.146315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.146546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.137 [2024-07-20 17:22:32.146572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.137 qpair failed and we were unable to recover it. 
00:30:16.137 [2024-07-20 17:22:32.146803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.137 [2024-07-20 17:22:32.147046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.137 [2024-07-20 17:22:32.147072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.137 qpair failed and we were unable to recover it.
[... the identical cycle of two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." repeats back-to-back for every reconnect attempt from 17:22:32.147271 through 17:22:32.221856 (Jenkins timestamps 00:30:16.137 to 00:30:16.140) ...]
00:30:16.140 [2024-07-20 17:22:32.222056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.222262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.222288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.222504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.222733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.222759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.223012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.223217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.223243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.223508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.223753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.223778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.224001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.224204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.224230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.224462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.224725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.224751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.225110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.225398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.225427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 
00:30:16.140 [2024-07-20 17:22:32.225665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.225930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.225958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.226178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.226380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.226406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.226640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.226877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.226904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.227125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.227391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.227417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.227652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.227862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.227888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.228119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.228341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.228369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.228590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.228862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.228889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 
00:30:16.140 [2024-07-20 17:22:32.229124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.229366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.229392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.229628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.229892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.229919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.230135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.230333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.230359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.230595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.230858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.230885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.231115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.231321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.231347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.231580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.231816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.231843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.232078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.232315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.232341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 
00:30:16.140 [2024-07-20 17:22:32.232566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.232803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.232830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.140 qpair failed and we were unable to recover it. 00:30:16.140 [2024-07-20 17:22:32.233069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.140 [2024-07-20 17:22:32.233308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.233335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.233568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.233763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.233789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.234002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.234238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.234264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.234500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.234741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.234766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.235028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.235228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.235254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.235519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.235723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.235749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 
00:30:16.141 [2024-07-20 17:22:32.235987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.236199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.236225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.236461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.236696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.236722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.236963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.237198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.237225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.237457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.237712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.237737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.237945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.238184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.238214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.238445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.238683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.238708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.238948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.239178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.239205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 
00:30:16.141 [2024-07-20 17:22:32.239410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.239639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.239665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.239921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.240133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.240159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.240377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.240622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.240648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.240882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.241081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.241107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.241343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.241556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.241581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.241824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.242053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.242080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.242286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.242523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.242550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 
00:30:16.141 [2024-07-20 17:22:32.242787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.243001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.243032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.243243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.243475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.243500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.243702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.243919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.243948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.244214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.244429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.244455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.244695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.244927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.244955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.245223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.245451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.245477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.245680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.245894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.245920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 
00:30:16.141 [2024-07-20 17:22:32.246183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.246417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.246443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.246683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.246923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.246950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.247183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.247420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.247446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.247680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.247913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.247944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.248213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.248444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.248470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.248698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.248907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.248934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.249143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.249415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.249440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 
00:30:16.141 [2024-07-20 17:22:32.249653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.249892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.249918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.250133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.250335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.250362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.250602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.250846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.250873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.251110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.251341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.251367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.251577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.251846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.251872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.252074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.252306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.252331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.252537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.252805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.252836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 
00:30:16.141 [2024-07-20 17:22:32.253102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.253311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.253337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.253579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.253819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.253846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.141 [2024-07-20 17:22:32.254081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.254321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.141 [2024-07-20 17:22:32.254348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.141 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.254577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.254775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.254807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.255046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.255274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.255299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.255499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.255738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.255764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.256000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.256218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.256244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 
00:30:16.142 [2024-07-20 17:22:32.256475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.256688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.256714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.256942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.257187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.257213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.257479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.257709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.257735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.257951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.258158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.258184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.258396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.258662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.258688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.258923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.259127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.259153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.259364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.259561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.259587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 
00:30:16.142 [2024-07-20 17:22:32.259791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.260060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.260086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.260482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.260733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.260760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.260980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.261187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.261214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.261428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.261660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.261688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.261895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.262134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.262162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.262552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.262790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.262822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.263041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.263285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.263312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 
00:30:16.142 [2024-07-20 17:22:32.263580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.263826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.263852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.142 [2024-07-20 17:22:32.264184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.264458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.142 [2024-07-20 17:22:32.264484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.142 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.264697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.264902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.264928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.265154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.265383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.265411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.265626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.265837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.265929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.266167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.266397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.266425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.266672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.266907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.266934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 
00:30:16.414 [2024-07-20 17:22:32.267162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.267402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.267431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.267694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.267964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.267991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.268203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.268438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.268463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.268667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.268876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.268903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.269119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.269385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.269411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.269642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.269882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.269909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 00:30:16.414 [2024-07-20 17:22:32.270141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.270351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.270379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it. 
00:30:16.414 [2024-07-20 17:22:32.270588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.270833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.414 [2024-07-20 17:22:32.270861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.414 qpair failed and we were unable to recover it.
[... the same three-line failure group repeats continuously from 17:22:32.270 through 17:22:32.345: two posix.c:1032:posix_sock_create connect() failures with errno = 111, followed by an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error, first for tqpair=0x7f554c000b90 and then for tqpair=0x7f555c000b90, all against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:16.418 [2024-07-20 17:22:32.346044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.346292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.346318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.418 qpair failed and we were unable to recover it. 00:30:16.418 [2024-07-20 17:22:32.346549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.346751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.346776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.418 qpair failed and we were unable to recover it. 00:30:16.418 [2024-07-20 17:22:32.347005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.347242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.347268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.418 qpair failed and we were unable to recover it. 00:30:16.418 [2024-07-20 17:22:32.347479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.347715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.347740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.418 qpair failed and we were unable to recover it. 00:30:16.418 [2024-07-20 17:22:32.347979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.348214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.348241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.418 qpair failed and we were unable to recover it. 00:30:16.418 [2024-07-20 17:22:32.348444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.348681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.348707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.418 qpair failed and we were unable to recover it. 00:30:16.418 [2024-07-20 17:22:32.348917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.418 [2024-07-20 17:22:32.349121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.349146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 
00:30:16.419 [2024-07-20 17:22:32.349350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.349558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.349585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.349833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.350066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.350092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.350329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.350557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.350582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.350829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.351023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.351048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.351255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.351525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.351550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.351828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.352068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.352094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.352331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.352541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.352566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 
00:30:16.419 [2024-07-20 17:22:32.352808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.353048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.353073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.353336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.353566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.353594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.353840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.354103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.354129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.354384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.354638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.354664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.354873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.355132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.355157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.355389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.355600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.355625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.355856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.356072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.356097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 
00:30:16.419 [2024-07-20 17:22:32.356300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.356500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.356528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.356765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.357007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.357033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.357298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.357553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.357578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.357813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.358052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.358078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.358309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.358544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.358570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.358817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.359033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.359058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.359276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.359516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.359541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 
00:30:16.419 [2024-07-20 17:22:32.359776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.360017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.360045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.360263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.360497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.360524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.360766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.360991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.361019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.361279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.361514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.361539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.361784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.362000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.362026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.362234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.362469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.362495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.362695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.362929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.362956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 
00:30:16.419 [2024-07-20 17:22:32.363175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.363379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.363407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.363649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.363883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.363909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.364105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.364328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.364353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.364558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.364765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.364790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.365041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.365273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.365299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.365537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.365799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.365832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.366083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.366316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.366341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 
00:30:16.419 [2024-07-20 17:22:32.366581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.366774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.366811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.367022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.367258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.367285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.367522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.367751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.367777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.368045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.368282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.419 [2024-07-20 17:22:32.368309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.419 qpair failed and we were unable to recover it. 00:30:16.419 [2024-07-20 17:22:32.368582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.368811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.368838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.369058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.369271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.369298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.369528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.369731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.369757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-07-20 17:22:32.369983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.370216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.370242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.370515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.370726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.370751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.370992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.371257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.371284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.371557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.371798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.371826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.372029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.372289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.372314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.372545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.372785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.372819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.373049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.373277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.373302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-07-20 17:22:32.373539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.373767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.373801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.374055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.374292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.374319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.374524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.374733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.374760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.375037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.375276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.375303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.375536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.375775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.375807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.376051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.376254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.376279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.376518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.376785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.376817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-07-20 17:22:32.377023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.377278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.377303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.377562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.377757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.377783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.378042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.378258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.378283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.378521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.378753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.378778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.379005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.379216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.379242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.379485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.379750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.379776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.379989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.380216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.380242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-07-20 17:22:32.380474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.380703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.380728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.380941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.381205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.381231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.381464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.381703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.381728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.381936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.382178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.382204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.382404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.382639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.382665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.382901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.383104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.383130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.383372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.383602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.383627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-07-20 17:22:32.383864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.384108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.384134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.384366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.384569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.384596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.384827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.385028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.385054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.385258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.385490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.385515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.385730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.385925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.385953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.386192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.386461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.386487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.386719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.386929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.386957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 
00:30:16.420 [2024-07-20 17:22:32.387191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.387394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.387420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.387617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.387852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.387882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.388085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.388321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.420 [2024-07-20 17:22:32.388346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.420 qpair failed and we were unable to recover it. 00:30:16.420 [2024-07-20 17:22:32.388584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-07-20 17:22:32.388822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-07-20 17:22:32.388849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.421 qpair failed and we were unable to recover it. 00:30:16.421 [2024-07-20 17:22:32.389122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-07-20 17:22:32.389360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.421 [2024-07-20 17:22:32.389386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.389641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.389906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.389933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.390139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.390375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.390400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.422 [2024-07-20 17:22:32.390637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.390875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.390902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.391143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.391352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.391379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.391619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.391830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.391857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.392082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.392318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.392345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.392623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.392829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.392857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.393095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.393325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.393350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.393596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.393824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.393851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.422 [2024-07-20 17:22:32.394089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.394320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.394345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.394581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.394816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.394842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.395052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.395290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.395315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.395571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.395835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.395861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.396095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.396344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.396374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.396572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.396804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.396831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.397075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.397273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.397299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.422 [2024-07-20 17:22:32.397529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.397738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.397763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.398021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.398232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.398260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.398514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.398774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.398808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.399047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.399288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.399314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.399557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.399819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.399845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.400080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.400279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.400305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.400517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.400721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.400747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.422 [2024-07-20 17:22:32.400992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.401199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.401231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.401471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.401731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.401757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.402023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.402255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.402280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.402530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.402724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.402750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.402958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.403189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.403215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.403457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.403689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.403714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.403918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.404149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.404175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.422 [2024-07-20 17:22:32.404367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.404557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.404582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.404784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.405031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.405057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.405295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.405496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.405522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.405759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.405999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.406030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.406241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.406449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.406475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.406720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.406965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.406991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.407199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.407429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.407455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 
00:30:16.422 [2024-07-20 17:22:32.407687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.407907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.407935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.408202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.408443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.408470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.408688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.408888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.408915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.422 qpair failed and we were unable to recover it. 00:30:16.422 [2024-07-20 17:22:32.409153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.422 [2024-07-20 17:22:32.409395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.409420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.409620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.409856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.409882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.410113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.410325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.410351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.410598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.410828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.410858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 
00:30:16.423 [2024-07-20 17:22:32.411132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.411371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.411396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.411638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.411855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.411883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.412095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.412322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.412347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.412581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.412779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.412824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.413091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.413351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.413376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.413646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.413881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.413907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.414102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.414360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.414385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 
00:30:16.423 [2024-07-20 17:22:32.414618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.414884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.414910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.415141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.415340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.415366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.415603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.415821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.415848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.416063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.416299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.416325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.416526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.416758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.416783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.417035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.417244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.417270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.417481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.417713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.417739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 
00:30:16.423 [2024-07-20 17:22:32.417951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.418187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.418212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.418448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.418684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.418709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.418912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.419127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.419152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.419382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.419589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.419614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.419834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.420036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.420062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.420267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.420477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.420503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.420775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.421019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.421046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 
00:30:16.423 [2024-07-20 17:22:32.421292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.421498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.421524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.421759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.421999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.422025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.422256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.422453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.422479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.422708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.422920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.422945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.423172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.423403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.423428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.423635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.423865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.423892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.424125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.424333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.424359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 
00:30:16.423 [2024-07-20 17:22:32.424591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.424802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.424833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.425076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.425316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.425341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.425582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.425849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.425875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.426092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.426322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.426347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.426557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.426799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.426825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.427068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.427342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.427367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.427574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.427806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.427832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 
00:30:16.423 [2024-07-20 17:22:32.428038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.428235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.428262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.428461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.428701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.428728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.428928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.429128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.429155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.429421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.429654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.429680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.429898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.430154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.430180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.423 qpair failed and we were unable to recover it. 00:30:16.423 [2024-07-20 17:22:32.430421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.423 [2024-07-20 17:22:32.430623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.430649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.430878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.431110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.431136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 
00:30:16.424 [2024-07-20 17:22:32.431371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.431586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.431612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.431847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.432115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.432141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.432383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.432589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.432614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.432823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.433081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.433107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.433337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.433571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.433597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.433848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.434061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.434087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.434321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.434551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.434576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 
00:30:16.424 [2024-07-20 17:22:32.434844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.435055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.435080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.435321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.435532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.435557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.435802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.436042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.436067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.436283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.436482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.436507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.436707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.436911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.436939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.437146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.437405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.437431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.437664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.437872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.437898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 
00:30:16.424 [2024-07-20 17:22:32.438099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.438309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.438334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.438542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.438775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.438806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.439011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.439242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.439267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.439503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.439739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.439766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.439984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.440211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.440236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.440464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.440668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.440693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.440930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.441129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.441155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 
00:30:16.424 [2024-07-20 17:22:32.441393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.441589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.441614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.441820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.442028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.442053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.442323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.442560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.442585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.442823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.443057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.443084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.443289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.443608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.443633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.443896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.444144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.444169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.444411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.444638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.444663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 
00:30:16.424 [2024-07-20 17:22:32.444913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.445152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.445178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.445377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.445612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.445637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.445901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.446139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.446166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.446424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.446654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.446681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.446930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.447158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.447183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.447424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.447660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.447687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.447902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.448143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.448168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 
00:30:16.424 [2024-07-20 17:22:32.448407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.448640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.448666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.448903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.449163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.449189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.449397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.449629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.449654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.449890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.450102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.450127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.450397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.450639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.450664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.424 qpair failed and we were unable to recover it. 00:30:16.424 [2024-07-20 17:22:32.450904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.451109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.424 [2024-07-20 17:22:32.451135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.451366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.451569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.451597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 
00:30:16.425 [2024-07-20 17:22:32.451835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.452090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.452116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.452315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.452520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.452545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.452758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.453004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.453032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.453260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.453456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.453482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.453716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.453927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.453953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.454186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.454421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.454447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.454685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.454922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.454949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 
00:30:16.425 [2024-07-20 17:22:32.455145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.455410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.455435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.455642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.455857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.455885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.456125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.456359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.456385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.459017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.459242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.459270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.459492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.459700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.459726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.459939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.460140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.460166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 00:30:16.425 [2024-07-20 17:22:32.460395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.460600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.425 [2024-07-20 17:22:32.460625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420 00:30:16.425 qpair failed and we were unable to recover it. 
00:30:16.426 [2024-07-20 17:22:32.491594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.491805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.491831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:16.426 qpair failed and we were unable to recover it.
00:30:16.426 [2024-07-20 17:22:32.492071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.492307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.492333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:16.426 qpair failed and we were unable to recover it.
00:30:16.426 [2024-07-20 17:22:32.492567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.492806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.492836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:16.426 qpair failed and we were unable to recover it.
00:30:16.426 [2024-07-20 17:22:32.493068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.493281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.493306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:16.426 qpair failed and we were unable to recover it.
00:30:16.426 [2024-07-20 17:22:32.493505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.493745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.493772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f555c000b90 with addr=10.0.0.2, port=4420
00:30:16.426 qpair failed and we were unable to recover it.
00:30:16.426 [2024-07-20 17:22:32.494034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.494261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.494289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.426 qpair failed and we were unable to recover it.
00:30:16.426 [2024-07-20 17:22:32.494529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.494760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.426 [2024-07-20 17:22:32.494786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.426 qpair failed and we were unable to recover it.
00:30:16.428 [2024-07-20 17:22:32.531890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.532153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.532178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.532405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.532669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.532694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.532906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.533107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.533132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.533395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.533621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.533645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.533851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.534088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.534113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.534320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.534543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.534568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.534771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.534987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.535013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 
00:30:16.428 [2024-07-20 17:22:32.535256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.535491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.535517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.535720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.535976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.536002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.536204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.536412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.536437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.536663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.536866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.536893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.537123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.537326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.537352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.537611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.537854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.537880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.538080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.538307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.538332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 
00:30:16.428 [2024-07-20 17:22:32.538540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.538809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.428 [2024-07-20 17:22:32.538834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.428 qpair failed and we were unable to recover it. 00:30:16.428 [2024-07-20 17:22:32.539083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.539313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.539338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.539543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.539776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.539814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.540022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.540256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.540281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.540486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.540723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.540750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.540971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.541181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.541208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.541448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.541681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.541706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 
00:30:16.429 [2024-07-20 17:22:32.541936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.542139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.542164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.542426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.542658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.542684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.542909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.543141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.543166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.543404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.543609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.543634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.543869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.544069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.544095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.544330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.544547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.544574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.544815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.545049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.545075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 
00:30:16.429 [2024-07-20 17:22:32.545308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.545508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.545533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.545739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.545973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.545999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.546236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.546453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.546479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.546687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.546927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.546953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.547188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.547398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.547422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.547661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.547895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.547921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.548155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.548357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.548382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 
00:30:16.429 [2024-07-20 17:22:32.548582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.548818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.548844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.549050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.549280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.549309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.549571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.549770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.549802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.550036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.550263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.550288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.550494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.550699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.550725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.550945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.551151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.551176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.551410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.551618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.551643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 
00:30:16.429 [2024-07-20 17:22:32.551886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.552133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.552159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.552404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.552648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.552674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.552904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.553136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.553161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.553373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.553583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.553609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.553819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.554030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.554059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.554267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.554480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.554508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.554884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.555126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.555151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 
00:30:16.429 [2024-07-20 17:22:32.555366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.555599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.555624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.555834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.556144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.556171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.556413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.556651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.556676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.556924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.557162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.557187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.557425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.557753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.429 [2024-07-20 17:22:32.557781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.429 qpair failed and we were unable to recover it. 00:30:16.429 [2024-07-20 17:22:32.558004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.558245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.558271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.693 qpair failed and we were unable to recover it. 00:30:16.693 [2024-07-20 17:22:32.558508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.558781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.558816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.693 qpair failed and we were unable to recover it. 
00:30:16.693 [2024-07-20 17:22:32.559060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.559308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.559340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.693 qpair failed and we were unable to recover it. 00:30:16.693 [2024-07-20 17:22:32.559550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.559784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.559820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.693 qpair failed and we were unable to recover it. 00:30:16.693 [2024-07-20 17:22:32.560068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.560331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.560358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.693 qpair failed and we were unable to recover it. 00:30:16.693 [2024-07-20 17:22:32.560567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.560817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.560845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.693 qpair failed and we were unable to recover it. 00:30:16.693 [2024-07-20 17:22:32.561107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.561349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.693 [2024-07-20 17:22:32.561375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.693 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.561580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.561818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.561845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.562085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.562316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.562342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 
00:30:16.694 [2024-07-20 17:22:32.562561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.562797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.562824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.563026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.563259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.563285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.563483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.563684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.563709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.563959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.564213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.564243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.564476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.564685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.564710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.564933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.565162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.565189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.565505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.565739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.565765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 
00:30:16.694 [2024-07-20 17:22:32.566011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.566216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.566240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.566444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.566677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.566702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.566933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.567142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.567167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.567399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.567600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.567626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.567867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.568105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.568131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.568400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.568631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.568656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.568873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.569108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.569134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 
00:30:16.694 [2024-07-20 17:22:32.569398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.569658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.569683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.569892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.570155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.570180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.570380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.570618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.570643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.570905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.571102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.571128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.571367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.571600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.571626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.571889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.572096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.572122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.572440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.572682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.572707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 
00:30:16.694 [2024-07-20 17:22:32.572944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.573173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.573199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.573410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.573641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.573666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.573879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.574144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.574170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.574378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.574586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.574611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.574849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.575112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.575137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.575351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.575582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.575607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 00:30:16.694 [2024-07-20 17:22:32.575843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.576047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.576073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.694 qpair failed and we were unable to recover it. 
00:30:16.694 [2024-07-20 17:22:32.576337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.694 [2024-07-20 17:22:32.576531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.576556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.695 qpair failed and we were unable to recover it. 00:30:16.695 [2024-07-20 17:22:32.576800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.577015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.577042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.695 qpair failed and we were unable to recover it. 00:30:16.695 [2024-07-20 17:22:32.577275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.577478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.577504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.695 qpair failed and we were unable to recover it. 00:30:16.695 [2024-07-20 17:22:32.577708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.577916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.577942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.695 qpair failed and we were unable to recover it. 00:30:16.695 [2024-07-20 17:22:32.578181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.578393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.578420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.695 qpair failed and we were unable to recover it. 00:30:16.695 [2024-07-20 17:22:32.578657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.578892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.578918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.695 qpair failed and we were unable to recover it. 00:30:16.695 [2024-07-20 17:22:32.579137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.579343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.695 [2024-07-20 17:22:32.579370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.695 qpair failed and we were unable to recover it. 
00:30:16.695 [2024-07-20 17:22:32.579609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.695 [2024-07-20 17:22:32.579843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.695 [2024-07-20 17:22:32.579868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.695 qpair failed and we were unable to recover it.
00:30:16.695 [... the identical four-line failure sequence above repeats for every subsequent connect retry, timestamps 2024-07-20 17:22:32.580106 through 17:22:32.653521, all against tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 ...]
00:30:16.700 [2024-07-20 17:22:32.653753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.653999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.654026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.654234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.654444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.654470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.654692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.654903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.654929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.655139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.655340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.655366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.655600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.655832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.655858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.656067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.656268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.656294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.656504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.656737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.656762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 
00:30:16.700 [2024-07-20 17:22:32.657005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.657238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.657265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.657499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.657729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.657755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.657973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.658205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.658230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.658426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.658630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.658654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.658890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.659119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.659145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.659358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.659571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.659598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.659814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.660050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.660076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 
00:30:16.700 [2024-07-20 17:22:32.660282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.660548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.660573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.660816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.661054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.661080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.661312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.661542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.661568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.661831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.662037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.662062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.662261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.662499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.700 [2024-07-20 17:22:32.662523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.700 qpair failed and we were unable to recover it. 00:30:16.700 [2024-07-20 17:22:32.662753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.662960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.662986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.663207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.663444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.663469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 
00:30:16.701 [2024-07-20 17:22:32.663706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.663915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.663941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.664204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.664450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.664475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.664718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.664927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.664954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.665188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.665456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.665481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.665718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.665931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.665957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.666206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.666421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.666448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.666662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.666882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.666908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 
00:30:16.701 [2024-07-20 17:22:32.667113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.667337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.667363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.667600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.667814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.667841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.668057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.668263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.668290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.668527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.668759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.668784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.669050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.669256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.669281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.669521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.669723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.669748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.669963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.670198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.670223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 
00:30:16.701 [2024-07-20 17:22:32.670486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.670718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.670743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.670995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.671226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.671251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.671482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.671709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.671733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.671974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.672178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.672205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.672407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.672635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.672660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.672893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.673130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.673156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.673418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.673623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.673648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 
00:30:16.701 [2024-07-20 17:22:32.673907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.674122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.674149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.674362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.674627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.674652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.674893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.675125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.675151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.675363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.675599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.675624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.675860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.676092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.676117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.676350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.676561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.676585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.701 [2024-07-20 17:22:32.676783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.676985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.677011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 
00:30:16.701 [2024-07-20 17:22:32.677271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.677469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.701 [2024-07-20 17:22:32.677495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.701 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.677731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.677986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.678012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.678222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.678430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.678455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.678719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.678953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.678979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.679185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.679420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.679446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.679657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.679866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.679894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.680111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.680312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.680338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 
00:30:16.702 [2024-07-20 17:22:32.680540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.680754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.680780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.681010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.681220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.681247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.681479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.681687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.681714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.681957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.682163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.682189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.682417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.682656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.682681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.682911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.683109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.683134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.683333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.683558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.683582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 
00:30:16.702 [2024-07-20 17:22:32.683827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.684057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.684081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.684328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.684563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.684588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.684788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.685027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.685053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.685262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.685475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.685500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.685740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.685965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.685991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.686204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.686412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.686438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.686670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.686909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.686933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 
00:30:16.702 [2024-07-20 17:22:32.687143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.687381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.687407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.687646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.687851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.687877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.688086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.688319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.688344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.688577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.688815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.688843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.689079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.689276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.689301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.689530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.689757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.689783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 00:30:16.702 [2024-07-20 17:22:32.690027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.690266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.702 [2024-07-20 17:22:32.690291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.702 qpair failed and we were unable to recover it. 
00:30:16.703 [2024-07-20 17:22:32.690494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.690695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.690720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.690928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.691152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.691178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.691413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.691646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.691681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.691887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.692081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.692106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.692346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.692571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.692596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.692838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.693045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.693070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.693302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.693497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.693522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 
00:30:16.703 [2024-07-20 17:22:32.693727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.693934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.693960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.694192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.694395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.694421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.694690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.694943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.694970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.695179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.695439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.695464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.695703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.695925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.695952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.696205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.696440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.696466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.696702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.696936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.696963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 
00:30:16.703 [2024-07-20 17:22:32.697228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.697440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.697467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.697712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.697958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.697985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.698206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.698444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.698471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.698683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.698895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.698921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.699183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.699387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.699412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.699607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.699872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.699903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 00:30:16.703 [2024-07-20 17:22:32.700159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.700392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.703 [2024-07-20 17:22:32.700418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.703 qpair failed and we were unable to recover it. 
00:30:16.703 [2024-07-20 17:22:32.700651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.703 [2024-07-20 17:22:32.700872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.703 [2024-07-20 17:22:32.700898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.703 qpair failed and we were unable to recover it.
00:30:16.706 (the connect() failed, errno = 111 / qpair failure record above repeats for every retry from 17:22:32.701 through 17:22:32.742; duplicate records omitted)
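[Note, not part of the captured log] errno 111 is ECONNREFUSED on Linux: the TCP connection to 10.0.0.2:4420 (the NVMe/TCP listener this test expects) is refused because nothing is accepting on that port at this point in the run. A minimal Python sketch of the same failing call; the address and port are taken from the log above, everything else is illustrative:

    import errno
    import socket

    def try_connect(ip: str, port: int) -> int:
        """One TCP connect attempt; returns 0 on success, else the errno."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            return sock.connect_ex((ip, port))

    rc = try_connect("10.0.0.2", 4420)   # values from the log above
    if rc == errno.ECONNREFUSED:         # errno 111 on Linux
        print(f"connect() failed, errno = {rc}")

This mirrors what posix_sock_create() reports each time the NVMe/TCP host driver retries the qpair.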
00:30:16.706 (connect() failed, errno = 111 and the matching nvme_tcp_qpair_connect_sock qpair failures continue as above from 17:22:32.742 to 17:22:32.745 while the test script proceeds; duplicates omitted)
00:30:16.706 17:22:32 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:16.706 17:22:32 -- common/autotest_common.sh@852 -- # return 0
00:30:16.706 17:22:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:30:16.706 17:22:32 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:16.706 17:22:32 -- common/autotest_common.sh@10 -- # set +x
00:30:16.707 (connect() failed, errno = 111 followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." repeats for every attempt from 17:22:32.745 through 17:22:32.759; duplicates omitted)
00:30:16.707 (connect()/qpair failure records continue from 17:22:32.759 to 17:22:32.762 while the script registers its cleanup trap and issues the first RPC; duplicates omitted)
00:30:16.707 17:22:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:16.707 17:22:32 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:16.707 17:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:16.707 17:22:32 -- common/autotest_common.sh@10 -- # set +x
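[Note, not part of the captured log] "rpc_cmd bdev_malloc_create 64 512 -b Malloc0" asks the SPDK target to create a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0. Under the hood rpc_cmd sends a JSON-RPC request over the target's Unix-domain socket; the sketch below is illustrative only, with method and parameter names taken from SPDK's JSON-RPC documentation and the default socket path /var/tmp/spdk.sock assumed:

    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_malloc_create",
        "params": {
            "num_blocks": 64 * 1024 * 1024 // 512,  # 64 MB at 512 B per block
            "block_size": 512,
            "name": "Malloc0",
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")        # assumed default RPC socket
        sock.sendall(json.dumps(request).encode())
        print(sock.recv(65536).decode())          # JSON-RPC reply with the bdev name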
00:30:16.708 (the same connect() failed, errno = 111 / "qpair failed and we were unable to recover it." pattern continues uninterrupted from 17:22:32.762 through 17:22:32.776; duplicates omitted)
00:30:16.708 [2024-07-20 17:22:32.776881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.777090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.777116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-07-20 17:22:32.777381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.777600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.777628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-07-20 17:22:32.777833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.778044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.778069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-07-20 17:22:32.778283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.778485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.778520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-07-20 17:22:32.778766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.778988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.779015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-07-20 17:22:32.779263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.779518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.708 [2024-07-20 17:22:32.779543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.708 qpair failed and we were unable to recover it. 00:30:16.708 [2024-07-20 17:22:32.779791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.779997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.780023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 [2024-07-20 17:22:32.780260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.780498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.780524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.780769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.781022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.781048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.781314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.781543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.781568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.781774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.782025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.782052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.782339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.782603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.782628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.782842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.783053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.783080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.783300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.783562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.783587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 [2024-07-20 17:22:32.783805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.784019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.784046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.784267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.784498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.784524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.784773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.784990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.785015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.785218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.785432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.785458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.785671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.785941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.785967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.786186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.786646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.786674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 00:30:16.709 [2024-07-20 17:22:32.786924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 [2024-07-20 17:22:32.787172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.709 Malloc0 00:30:16.709 [2024-07-20 17:22:32.787199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420 00:30:16.709 qpair failed and we were unable to recover it. 
00:30:16.709 17:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:16.709 17:22:32 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:16.709 17:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:16.709 17:22:32 -- common/autotest_common.sh@10 -- # set +x
00:30:16.709 [2024-07-20 17:22:32.787451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.709 [2024-07-20 17:22:32.787649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.709 [2024-07-20 17:22:32.787675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.709 qpair failed and we were unable to recover it.
00:30:16.709 [2024-07-20 17:22:32.790906] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
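The errno = 111 storm in this stretch is plain ECONNREFUSED: the initiator keeps retrying 10.0.0.2:4420 while the target side is still being configured, and the rpc_cmd nvmf_create_transport -t tcp -o traced above is the first bring-up step (the "TCP Transport Init" notice is its effect). A minimal way to watch for the same condition from a shell, as a sketch assuming a bash with /dev/tcp support rather than anything the test itself runs:

  # probe the listener the initiator is retrying; this keeps failing with
  # "Connection refused" (errno 111) until nvmf_tcp_listen comes up
  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
    && echo "listener up" || echo "still refused (errno 111)"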
00:30:16.709 [2024-07-20 17:22:32.794287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.709 [2024-07-20 17:22:32.794499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.710 [2024-07-20 17:22:32.794524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.710 qpair failed and we were unable to recover it.
00:30:16.710 17:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:16.710 17:22:32 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:16.710 17:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:16.710 17:22:32 -- common/autotest_common.sh@10 -- # set +x
00:30:16.710 [2024-07-20 17:22:32.804037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.710 [2024-07-20 17:22:32.804266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.710 [2024-07-20 17:22:32.804291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.710 qpair failed and we were unable to recover it.
00:30:16.710 17:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:16.711 17:22:32 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:16.711 17:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:16.711 17:22:32 -- common/autotest_common.sh@10 -- # set +x
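rpc_cmd nvmf_subsystem_add_ns above attaches the Malloc0 bdev (created earlier in the run; only its name surfaces in this excerpt) to cnode1 as a namespace. As a hedged sketch of how such a bdev is typically created first with SPDK's rpc.py; the 64 MiB size and 512-byte block size are illustrative assumptions, not values from this log:

  # create a RAM-backed bdev named Malloc0 (size and block size assumed)
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  # then expose it through the subsystem, as the trace above does
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0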
00:30:16.711 [2024-07-20 17:22:32.810772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.711 [2024-07-20 17:22:32.810987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.711 [2024-07-20 17:22:32.811013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.711 qpair failed and we were unable to recover it.
00:30:16.711 17:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:16.711 17:22:32 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:16.711 17:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:16.711 17:22:32 -- common/autotest_common.sh@10 -- # set +x
00:30:16.711 [2024-07-20 17:22:32.814208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.711 [2024-07-20 17:22:32.814439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:16.711 [2024-07-20 17:22:32.814464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f554c000b90 with addr=10.0.0.2, port=4420
00:30:16.711 qpair failed and we were unable to recover it.
00:30:16.711 [2024-07-20 17:22:32.819133] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:16.711 [2024-07-20 17:22:32.822228] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set
00:30:16.711 [2024-07-20 17:22:32.822304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f554c000b90 (107): Transport endpoint is not connected
00:30:16.711 [2024-07-20 17:22:32.822377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:16.711 qpair failed and we were unable to recover it.
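Read together, the rpc_cmd lines traced above are the complete target bring-up that finally ends the ECONNREFUSED loop: transport, subsystem, namespace, then the listener whose "NVMe/TCP Target Listening" notice appears just before the first real qpair traffic. The equivalent sequence against a standalone nvmf_tgt with SPDK's scripts/rpc.py, as a sketch (the Malloc0 bdev is assumed to exist already, and the trace's optional -o flag is omitted here):

  # 1. create the TCP transport ("TCP Transport Init" notice above)
  scripts/rpc.py nvmf_create_transport -t tcp
  # 2. create the subsystem, allowing any host (-a) with an arbitrary serial
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # 3. attach the Malloc0 bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # 4. listen on 10.0.0.2:4420 ("NVMe/TCP Target Listening" notice above)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420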
00:30:16.711 17:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:16.711 17:22:32 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:16.711 17:22:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:16.711 17:22:32 -- common/autotest_common.sh@10 -- # set +x
00:30:16.711 17:22:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:16.711 17:22:32 -- host/target_disconnect.sh@58 -- # wait 668987
00:30:16.711 [2024-07-20 17:22:32.831686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.711 [2024-07-20 17:22:32.831959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.711 [2024-07-20 17:22:32.831986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.711 [2024-07-20 17:22:32.832002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.711 [2024-07-20 17:22:32.832015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f554c000b90
00:30:16.711 [2024-07-20 17:22:32.832046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:16.711 qpair failed and we were unable to recover it.
00:30:16.711 [2024-07-20 17:22:32.841587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.711 [2024-07-20 17:22:32.841805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.711 [2024-07-20 17:22:32.841832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.711 [2024-07-20 17:22:32.841847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.711 [2024-07-20 17:22:32.841861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f554c000b90
00:30:16.711 [2024-07-20 17:22:32.841891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:16.711 qpair failed and we were unable to recover it.
00:30:16.971 [2024-07-20 17:22:32.851541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.971 [2024-07-20 17:22:32.851759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.971 [2024-07-20 17:22:32.851809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.971 [2024-07-20 17:22:32.851830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.971 [2024-07-20 17:22:32.851844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90
00:30:16.971 [2024-07-20 17:22:32.851877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:16.971 qpair failed and we were unable to recover it.
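A quick decode of the failure signature that repeats from here on, offered as interpretation rather than anything the log states outright: sc 130 is 0x82, and with sct 1 (command-specific status) that is the NVMe-oF fabrics CONNECT "Invalid Parameters" code, consistent with the target's own "Unknown controller ID 0x1" complaint when an I/O qpair tries to join a controller the target no longer knows:

  printf 'sct 1, sc 0x%02x\n' 130   # -> sct 1, sc 0x82 (Connect Invalid Parameters)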
00:30:16.971 [2024-07-20 17:22:32.861603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.971 [2024-07-20 17:22:32.861833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.971 [2024-07-20 17:22:32.861861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.971 [2024-07-20 17:22:32.861878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.971 [2024-07-20 17:22:32.861893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90
00:30:16.971 [2024-07-20 17:22:32.861924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:16.971 qpair failed and we were unable to recover it.
00:30:16.971 [2024-07-20 17:22:32.971899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.971 [2024-07-20 17:22:32.972156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.971 [2024-07-20 17:22:32.972182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.971 [2024-07-20 17:22:32.972196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.971 [2024-07-20 17:22:32.972209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90
00:30:16.971 [2024-07-20 17:22:32.972238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:16.971 qpair failed and we were unable to recover it.
00:30:16.971 [2024-07-20 17:22:32.982004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:32.982227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:32.982260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:32.982275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:32.982288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:32.982317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:32.992051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:32.992284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:32.992309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:32.992323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:32.992336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:32.992366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:33.002054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.002269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.002296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.002310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.002323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.002353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 
00:30:16.972 [2024-07-20 17:22:33.012038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.012297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.012323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.012338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.012351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.012381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:33.022024] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.022235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.022261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.022276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.022289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.022318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:33.032091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.032343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.032369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.032384] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.032397] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.032426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 
00:30:16.972 [2024-07-20 17:22:33.042217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.042427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.042453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.042468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.042480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.042511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:33.052127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.052342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.052368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.052385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.052399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.052429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:33.062154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.062366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.062392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.062407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.062420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.062448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 
00:30:16.972 [2024-07-20 17:22:33.072200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.072402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.072432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.072447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.072460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.072489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:33.082199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.082399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.082424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.082439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.082452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.082482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:33.092187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.092411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.092436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.092451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.092464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.092493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 
00:30:16.972 [2024-07-20 17:22:33.102313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.102542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.102568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.102583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.102596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.102626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:33.112328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.112550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.112575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.112589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.112603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.972 [2024-07-20 17:22:33.112638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.972 qpair failed and we were unable to recover it. 00:30:16.972 [2024-07-20 17:22:33.122338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.972 [2024-07-20 17:22:33.122551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.972 [2024-07-20 17:22:33.122578] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.972 [2024-07-20 17:22:33.122592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.972 [2024-07-20 17:22:33.122606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:16.973 [2024-07-20 17:22:33.122635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.973 qpair failed and we were unable to recover it. 
00:30:17.230 [2024-07-20 17:22:33.132321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.230 [2024-07-20 17:22:33.132528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.230 [2024-07-20 17:22:33.132554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.230 [2024-07-20 17:22:33.132568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.230 [2024-07-20 17:22:33.132580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:17.230 [2024-07-20 17:22:33.132610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.230 qpair failed and we were unable to recover it. 00:30:17.230 [2024-07-20 17:22:33.142394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.230 [2024-07-20 17:22:33.142649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.230 [2024-07-20 17:22:33.142675] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.230 [2024-07-20 17:22:33.142689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.230 [2024-07-20 17:22:33.142702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:17.230 [2024-07-20 17:22:33.142733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.230 qpair failed and we were unable to recover it. 00:30:17.230 [2024-07-20 17:22:33.152474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.230 [2024-07-20 17:22:33.152735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.230 [2024-07-20 17:22:33.152761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.230 [2024-07-20 17:22:33.152775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.230 [2024-07-20 17:22:33.152789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:17.230 [2024-07-20 17:22:33.152827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.230 qpair failed and we were unable to recover it. 
00:30:17.230 [2024-07-20 17:22:33.162474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.230 [2024-07-20 17:22:33.162691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.230 [2024-07-20 17:22:33.162722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.230 [2024-07-20 17:22:33.162738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.230 [2024-07-20 17:22:33.162751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:17.230 [2024-07-20 17:22:33.162781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.230 qpair failed and we were unable to recover it. 00:30:17.230 [2024-07-20 17:22:33.172495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.230 [2024-07-20 17:22:33.172736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.230 [2024-07-20 17:22:33.172761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.230 [2024-07-20 17:22:33.172776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.230 [2024-07-20 17:22:33.172789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:17.230 [2024-07-20 17:22:33.172827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.230 qpair failed and we were unable to recover it. 00:30:17.230 [2024-07-20 17:22:33.182626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.230 [2024-07-20 17:22:33.182863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.230 [2024-07-20 17:22:33.182890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.230 [2024-07-20 17:22:33.182904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.230 [2024-07-20 17:22:33.182917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5554000b90 00:30:17.230 [2024-07-20 17:22:33.182948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:17.230 qpair failed and we were unable to recover it. 00:30:17.230 [2024-07-20 17:22:33.182988] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:17.231 A controller has encountered a failure and is being reset. 00:30:17.231 [2024-07-20 17:22:33.183047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf50100 (9): Bad file descriptor 00:30:17.231 Controller properly reset. 
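What the repeated block above shows: the host keeps re-issuing the Fabrics CONNECT for I/O qpair 2 while the target no longer recognizes controller ID 0x1, so every attempt is rejected with sct 1, sc 130 (0x82, the Fabrics "Connect Invalid Parameters" status) until the Keep Alive finally fails and the controller is reset. For illustration only -- this run drives the SPDK userspace initiator, not the kernel one -- a comparable retry loop with standard nvme-cli flags (address, port and NQN taken from the log; the retry count and delay are arbitrary) might look like:

    # hedged sketch: retry the fabrics CONNECT until the target-side reset completes
    for attempt in $(seq 1 30); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 && break
        sleep 0.5   # back off between CONNECT attempts
    done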
00:30:22.487 Initializing NVMe Controllers
00:30:22.487 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:22.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:22.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:22.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:22.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:22.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:22.487 Initialization complete. Launching workers.
00:30:22.487 Starting thread on core 1
00:30:22.487 Starting thread on core 2
00:30:22.487 Starting thread on core 3
00:30:22.487 Starting thread on core 0
00:30:22.487 17:22:38 -- host/target_disconnect.sh@59 -- # sync
00:30:22.487
00:30:22.487 real 0m11.459s
00:30:22.487 user 0m32.327s
00:30:22.487 sys 0m7.011s
00:30:22.487 17:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:22.487 17:22:38 -- common/autotest_common.sh@10 -- # set +x
00:30:22.487 ************************************
00:30:22.487 END TEST nvmf_target_disconnect_tc2
00:30:22.487 ************************************
00:30:22.487 17:22:38 -- host/target_disconnect.sh@80 -- # '[' -n '' ']'
00:30:22.487 17:22:38 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:30:22.487 17:22:38 -- host/target_disconnect.sh@85 -- # nvmftestfini
00:30:22.487 17:22:38 -- nvmf/common.sh@476 -- # nvmfcleanup
00:30:22.487 17:22:38 -- nvmf/common.sh@116 -- # sync
00:30:22.487 17:22:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:30:22.487 17:22:38 -- nvmf/common.sh@119 -- # set +e
00:30:22.487 17:22:38 -- nvmf/common.sh@120 -- # for i in {1..20}
00:30:22.487 17:22:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:30:22.487 rmmod nvme_tcp
00:30:22.487 rmmod nvme_fabrics
00:30:22.487 rmmod nvme_keyring
00:30:22.487 17:22:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:30:22.487 17:22:38 -- nvmf/common.sh@123 -- # set -e
00:30:22.487 17:22:38 -- nvmf/common.sh@124 -- # return 0
00:30:22.487 17:22:38 -- nvmf/common.sh@477 -- # '[' -n 669533 ']'
00:30:22.487 17:22:38 -- nvmf/common.sh@478 -- # killprocess 669533
00:30:22.487 17:22:38 -- common/autotest_common.sh@926 -- # '[' -z 669533 ']'
00:30:22.487 17:22:38 -- common/autotest_common.sh@930 -- # kill -0 669533
00:30:22.487 17:22:38 -- common/autotest_common.sh@931 -- # uname
00:30:22.487 17:22:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:22.487 17:22:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 669533
00:30:22.487 17:22:38 -- common/autotest_common.sh@932 -- # process_name=reactor_4
00:30:22.487 17:22:38 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']'
00:30:22.487 17:22:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 669533'
killing process with pid 669533
00:30:22.487 17:22:38 -- common/autotest_common.sh@945 -- # kill 669533
00:30:22.487 17:22:38 -- common/autotest_common.sh@950 -- # wait 669533
00:30:22.487 17:22:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:30:22.487 17:22:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:30:22.487 17:22:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:30:22.487 17:22:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:22.487 17:22:38 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:30:22.487 17:22:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:22.487 17:22:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:22.487 17:22:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:24.388 17:22:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:30:24.388
00:30:24.388 real 0m16.002s
00:30:24.388 user 0m57.408s
00:30:24.388 sys 0m9.497s
00:30:24.388 17:22:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:24.388 17:22:40 -- common/autotest_common.sh@10 -- # set +x
00:30:24.388 ************************************
00:30:24.388 END TEST nvmf_target_disconnect
00:30:24.388 ************************************
00:30:24.388 17:22:40 -- nvmf/nvmf.sh@127 -- # timing_exit host
00:30:24.388 17:22:40 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:24.388 17:22:40 -- common/autotest_common.sh@10 -- # set +x
00:30:24.388 17:22:40 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT
00:30:24.388
00:30:24.388 real 22m25.614s
00:30:24.388 user 64m46.608s
00:30:24.388 sys 5m23.237s
00:30:24.388 17:22:40 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:24.388 17:22:40 -- common/autotest_common.sh@10 -- # set +x
00:30:24.388 ************************************
00:30:24.388 END TEST nvmf_tcp
00:30:24.388 ************************************
00:30:24.388 17:22:40 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]]
00:30:24.388 17:22:40 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:24.388 17:22:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:30:24.388 17:22:40 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:24.388 17:22:40 -- common/autotest_common.sh@10 -- # set +x
00:30:24.388 ************************************
00:30:24.388 START TEST spdkcli_nvmf_tcp
00:30:24.388 ************************************
00:30:24.388 17:22:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:24.388 * Looking for test storage...
00:30:24.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:24.388 17:22:40 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:24.388 17:22:40 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:24.388 17:22:40 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:24.388 17:22:40 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.388 17:22:40 -- nvmf/common.sh@7 -- # uname -s 00:30:24.388 17:22:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.388 17:22:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.388 17:22:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.388 17:22:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.388 17:22:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.388 17:22:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.388 17:22:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.388 17:22:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.388 17:22:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.388 17:22:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.647 17:22:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.647 17:22:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.647 17:22:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.647 17:22:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.647 17:22:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.647 17:22:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.647 17:22:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.647 17:22:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.647 17:22:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.647 17:22:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.647 17:22:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.647 17:22:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.647 17:22:40 -- paths/export.sh@5 -- # export PATH 00:30:24.647 17:22:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.647 17:22:40 -- nvmf/common.sh@46 -- # : 0 00:30:24.647 17:22:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:24.647 17:22:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:24.647 17:22:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:24.647 17:22:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.647 17:22:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.647 17:22:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:24.647 17:22:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:24.647 17:22:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:24.647 17:22:40 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:24.647 17:22:40 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:24.647 17:22:40 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:24.647 17:22:40 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:24.647 17:22:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:24.647 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:30:24.647 17:22:40 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:24.647 17:22:40 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=670635 00:30:24.647 17:22:40 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:24.647 17:22:40 -- spdkcli/common.sh@34 -- # waitforlisten 670635 00:30:24.647 17:22:40 -- common/autotest_common.sh@819 -- # '[' -z 670635 ']' 00:30:24.647 17:22:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.647 17:22:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:24.647 17:22:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.647 17:22:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:24.647 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:30:24.647 [2024-07-20 17:22:40.596662] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:24.647 [2024-07-20 17:22:40.596753] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670635 ]
00:30:24.647 EAL: No free 2048 kB hugepages reported on node 1
00:30:24.647 [2024-07-20 17:22:40.659396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:30:24.647 [2024-07-20 17:22:40.748066] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:24.647 [2024-07-20 17:22:40.748283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:24.647 [2024-07-20 17:22:40.748285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
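The trace above is run_nvmf_tgt bringing up the target this spdkcli test drives: build/bin/nvmf_tgt is started with a two-core mask and waitforlisten polls until the app answers on /var/tmp/spdk.sock. A minimal standalone sketch of that launch-and-wait pattern, assuming a built SPDK tree (rpc_get_methods is a stock rpc.py call; the poll interval is arbitrary):

    # hedged sketch: start nvmf_tgt and block until its RPC socket answers
    build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2   # same idea as waitforlisten: poll the UNIX-domain RPC socket
    done

Once the target is listening, the spdkcli job below builds the whole nvmf configuration in a single batch.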
00:30:25.580 17:22:41 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:25.580 17:22:41 -- common/autotest_common.sh@852 -- # return 0
00:30:25.580 17:22:41 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:30:25.580 17:22:41 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:25.580 17:22:41 -- common/autotest_common.sh@10 -- # set +x
00:30:25.580 17:22:41 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:30:25.580 17:22:41 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:30:25.580 17:22:41 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:30:25.580 17:22:41 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:25.580 17:22:41 -- common/autotest_common.sh@10 -- # set +x
00:30:25.580 17:22:41 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:30:25.580 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:30:25.580 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:30:25.580 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:30:25.580 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:30:25.580 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:30:25.580 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:30:25.580 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:30:25.580 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:30:25.580 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:30:25.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:30:25.580 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:30:25.580 '
00:30:26.147 [2024-07-20 17:22:42.016845] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:30:28.043 [2024-07-20 17:22:44.175115] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:29.411 [2024-07-20 17:22:45.399531] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:30:31.937 [2024-07-20 17:22:47.662803] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:30:33.833 [2024-07-20 17:22:49.617224] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:30:35.207 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:30:35.207 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:30:35.207 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:30:35.207 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:30:35.207 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:30:35.207 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:30:35.207 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:30:35.207 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:30:35.207 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:30:35.207 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:30:35.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:30:35.207 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
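Each "Executing command" line above is spdkcli_job.py replaying one spdkcli command against the live target and verifying the object it reports. The same configuration can be built one command at a time with the tree's spdkcli front end -- a hedged sketch, since the only direct invocation this run shows is "spdkcli.py ll /nvmf"; the commands themselves are copied from the job above:

    # hedged sketch: issue spdkcli commands individually instead of via a job file
    scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    scripts/spdkcli.py ll /nvmf   # inspect the resulting tree, as check_match does next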
00:30:35.207 17:22:51 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:30:35.207 17:22:51 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:35.207 17:22:51 -- common/autotest_common.sh@10 -- # set +x
00:30:35.207 17:22:51 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:30:35.207 17:22:51 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:35.207 17:22:51 -- common/autotest_common.sh@10 -- # set +x
00:30:35.207 17:22:51 -- spdkcli/nvmf.sh@69 -- # check_match
00:30:35.207 17:22:51 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:30:35.772 17:22:51 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:30:35.772 17:22:51 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:30:35.772 17:22:51 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:30:35.772 17:22:51 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:35.772 17:22:51 -- common/autotest_common.sh@10 -- # set +x
00:30:35.772 17:22:51 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
17:22:51 -- common/autotest_common.sh@712 -- # xtrace_disable
00:30:35.772 17:22:51 -- common/autotest_common.sh@10 -- # set +x
00:30:35.772 17:22:51 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:30:35.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:30:35.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:30:35.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:30:35.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:30:35.772 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:30:35.772 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:30:35.772 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:30:35.772 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:30:35.772 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:30:35.772 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:30:35.772 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:30:35.772 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:30:35.772 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:30:35.772 '
00:30:41.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:30:41.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:30:41.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:30:41.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:30:41.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:30:41.036 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:30:41.036 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:30:41.036 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:30:41.036 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:30:41.036 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:30:41.036 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:30:41.036 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:30:41.036 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:30:41.036 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:30:41.036 17:22:56 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:30:41.036 17:22:56 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:41.036 17:22:56 -- common/autotest_common.sh@10 -- # set +x
00:30:41.036 17:22:56 -- spdkcli/nvmf.sh@90 -- # killprocess 670635
00:30:41.036 17:22:56 -- common/autotest_common.sh@926 -- # '[' -z 670635 ']'
00:30:41.036 17:22:56 -- common/autotest_common.sh@930 -- # kill -0 670635
00:30:41.036 17:22:56 -- common/autotest_common.sh@931 -- # uname
00:30:41.036 17:22:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:41.036 17:22:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 670635
00:30:41.036 17:22:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:30:41.036 17:22:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:30:41.036 17:22:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 670635'
killing process with pid 670635
00:30:41.036 17:22:56 -- common/autotest_common.sh@945 -- # kill 670635
00:30:41.036 [2024-07-20 17:22:56.950331] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times
00:30:41.036 17:22:56 -- common/autotest_common.sh@950 -- # wait 670635
00:30:41.037 17:22:57 -- spdkcli/nvmf.sh@1 -- # cleanup
00:30:41.037 17:22:57 -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:30:41.037 17:22:57 -- spdkcli/common.sh@13 -- # '[' -n 670635 ']'
00:30:41.037 17:22:57 -- spdkcli/common.sh@14 -- # killprocess 670635
00:30:41.037 17:22:57 -- common/autotest_common.sh@926 -- # '[' -z 670635 ']'
00:30:41.037 17:22:57 -- common/autotest_common.sh@930 -- # kill -0 670635
00:30:41.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (670635) - No such process
00:30:41.037 17:22:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 670635 is not found'
Process with pid 670635 is not found
00:30:41.037 17:22:57 -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:30:41.037 17:22:57 -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:30:41.037 17:22:57 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:30:41.037
00:30:41.037 real 0m16.683s
00:30:41.037 user 0m35.340s
00:30:41.037 sys 0m0.815s
00:30:41.037 17:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:41.037 17:22:57 -- common/autotest_common.sh@10 -- # set +x
00:30:41.295 ************************************
00:30:41.295 END TEST spdkcli_nvmf_tcp
00:30:41.295 ************************************
00:30:41.295 17:22:57 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:30:41.295 17:22:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:30:41.295 17:22:57 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:41.295 17:22:57 -- common/autotest_common.sh@10 -- # set +x
00:30:41.295 ************************************
00:30:41.295 START TEST nvmf_identify_passthru
00:30:41.295 ************************************
00:30:41.295 17:22:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:30:41.295 * Looking for test storage...
00:30:41.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:41.295 17:22:57 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:41.295 17:22:57 -- nvmf/common.sh@7 -- # uname -s 00:30:41.295 17:22:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:41.295 17:22:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:41.295 17:22:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:41.295 17:22:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:41.295 17:22:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:41.295 17:22:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:41.295 17:22:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:41.295 17:22:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:41.295 17:22:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:41.295 17:22:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:41.295 17:22:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.295 17:22:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:41.295 17:22:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:41.295 17:22:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:41.295 17:22:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:41.295 17:22:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.295 17:22:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.295 17:22:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.295 17:22:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.295 17:22:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.295 17:22:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.296 17:22:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.296 17:22:57 -- paths/export.sh@5 -- # export PATH 00:30:41.296 17:22:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.296 17:22:57 -- nvmf/common.sh@46 -- # : 0 00:30:41.296 17:22:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:41.296 17:22:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:41.296 17:22:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:41.296 17:22:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:41.296 17:22:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:41.296 17:22:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:41.296 17:22:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:41.296 17:22:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:41.296 17:22:57 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:41.296 17:22:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:41.296 17:22:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:41.296 17:22:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:41.296 17:22:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.296 17:22:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.296 17:22:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.296 17:22:57 -- paths/export.sh@5 -- # export PATH 00:30:41.296 17:22:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:41.296 17:22:57 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:30:41.296 17:22:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:41.296 17:22:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:41.296 17:22:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:41.296 17:22:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:41.296 17:22:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:41.296 17:22:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.296 17:22:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:41.296 17:22:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:41.296 17:22:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:41.296 17:22:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:41.296 17:22:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:41.296 17:22:57 -- common/autotest_common.sh@10 -- # set +x 00:30:43.198 17:22:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:43.198 17:22:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:43.198 17:22:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:43.198 17:22:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:43.198 17:22:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:43.198 17:22:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:43.198 17:22:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:43.198 17:22:59 -- nvmf/common.sh@294 -- # net_devs=() 00:30:43.198 17:22:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:43.198 17:22:59 -- nvmf/common.sh@295 -- # e810=() 00:30:43.198 17:22:59 -- nvmf/common.sh@295 -- # local -ga e810 00:30:43.198 17:22:59 -- nvmf/common.sh@296 -- # x722=() 00:30:43.198 17:22:59 -- nvmf/common.sh@296 -- # local -ga x722 00:30:43.198 17:22:59 -- nvmf/common.sh@297 -- # mlx=() 00:30:43.198 17:22:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:43.198 17:22:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.198 17:22:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:43.198 17:22:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:43.198 17:22:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:43.198 17:22:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:43.198 17:22:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:43.198 Found 0000:0a:00.0 (0x8086 - 
0x159b) 00:30:43.198 17:22:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:43.198 17:22:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:43.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:43.198 17:22:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:43.198 17:22:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:43.198 17:22:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.198 17:22:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:43.198 17:22:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.198 17:22:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:43.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:43.198 17:22:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.198 17:22:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:43.198 17:22:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.198 17:22:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:43.198 17:22:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.198 17:22:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:43.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:43.198 17:22:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.198 17:22:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:43.198 17:22:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:43.198 17:22:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:43.198 17:22:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:43.198 17:22:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:43.198 17:22:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.198 17:22:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:43.198 17:22:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:43.198 17:22:59 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:43.198 17:22:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:43.198 17:22:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:43.198 17:22:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:43.198 17:22:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.198 17:22:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:43.198 17:22:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:43.198 17:22:59 -- nvmf/common.sh@247 -- # ip netns add 
cvl_0_0_ns_spdk 00:30:43.198 17:22:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:43.198 17:22:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:43.198 17:22:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:43.198 17:22:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:43.198 17:22:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:43.198 17:22:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:43.198 17:22:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:43.198 17:22:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:43.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:30:43.198 00:30:43.198 --- 10.0.0.2 ping statistics --- 00:30:43.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.198 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:30:43.199 17:22:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:43.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:43.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:43.199 00:30:43.199 --- 10.0.0.1 ping statistics --- 00:30:43.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.199 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:43.199 17:22:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.199 17:22:59 -- nvmf/common.sh@410 -- # return 0 00:30:43.199 17:22:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:43.199 17:22:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.199 17:22:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:43.199 17:22:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:43.199 17:22:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.199 17:22:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:43.199 17:22:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:43.199 17:22:59 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:43.199 17:22:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:43.199 17:22:59 -- common/autotest_common.sh@10 -- # set +x 00:30:43.199 17:22:59 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:43.199 17:22:59 -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:43.199 17:22:59 -- common/autotest_common.sh@1509 -- # local bdfs 00:30:43.199 17:22:59 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:43.199 17:22:59 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:43.199 17:22:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:43.199 17:22:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:43.199 17:22:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:43.199 17:22:59 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:43.199 17:22:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:43.199 17:22:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:43.199 17:22:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:30:43.199 17:22:59 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:30:43.199 17:22:59 -- 
target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:43.199 17:22:59 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:43.199 17:22:59 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:43.199 17:22:59 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:43.199 17:22:59 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:43.457 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.680 17:23:03 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:47.680 17:23:03 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:47.680 17:23:03 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:47.680 17:23:03 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:47.680 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.901 17:23:07 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:51.901 17:23:07 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:51.901 17:23:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:51.901 17:23:07 -- common/autotest_common.sh@10 -- # set +x 00:30:51.901 17:23:07 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:51.901 17:23:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:51.901 17:23:07 -- common/autotest_common.sh@10 -- # set +x 00:30:51.901 17:23:07 -- target/identify_passthru.sh@31 -- # nvmfpid=675364 00:30:51.901 17:23:07 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:51.901 17:23:07 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:51.901 17:23:07 -- target/identify_passthru.sh@35 -- # waitforlisten 675364 00:30:51.901 17:23:07 -- common/autotest_common.sh@819 -- # '[' -z 675364 ']' 00:30:51.901 17:23:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.901 17:23:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:51.901 17:23:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.901 17:23:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:51.901 17:23:07 -- common/autotest_common.sh@10 -- # set +x 00:30:51.901 [2024-07-20 17:23:07.767989] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:51.901 [2024-07-20 17:23:07.768077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.901 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.901 [2024-07-20 17:23:07.837489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.901 [2024-07-20 17:23:07.925510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:51.901 [2024-07-20 17:23:07.925649] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
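For reference, the serial/model probe a few entries up reduces to a short pipeline. A minimal sketch, assuming the SPDK build path and the NVMe BDF (0000:88:00.0) from this run:

    identify=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
    bdf=0000:88:00.0
    # Identify the controller over PCIe and pull out the third whitespace-separated
    # field, exactly as the grep/awk pair in the log above does.
    serial=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    model=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    echo "$serial $model"    # PHLJ916004901P0FGN INTEL in this run

The same pipeline is reused later against the NVMe-oF side ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 ...') so the passthrough identify data can be compared field-for-field against the PCIe original.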
00:30:51.901 [2024-07-20 17:23:07.925667] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.901 [2024-07-20 17:23:07.925679] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.901 [2024-07-20 17:23:07.925730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.901 [2024-07-20 17:23:07.925791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.901 [2024-07-20 17:23:07.925857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.901 [2024-07-20 17:23:07.925860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.901 17:23:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:51.901 17:23:07 -- common/autotest_common.sh@852 -- # return 0 00:30:51.901 17:23:07 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:51.901 17:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.901 17:23:07 -- common/autotest_common.sh@10 -- # set +x 00:30:51.901 INFO: Log level set to 20 00:30:51.901 INFO: Requests: 00:30:51.901 { 00:30:51.901 "jsonrpc": "2.0", 00:30:51.901 "method": "nvmf_set_config", 00:30:51.901 "id": 1, 00:30:51.901 "params": { 00:30:51.901 "admin_cmd_passthru": { 00:30:51.901 "identify_ctrlr": true 00:30:51.901 } 00:30:51.901 } 00:30:51.901 } 00:30:51.901 00:30:51.901 INFO: response: 00:30:51.901 { 00:30:51.901 "jsonrpc": "2.0", 00:30:51.901 "id": 1, 00:30:51.901 "result": true 00:30:51.901 } 00:30:51.901 00:30:51.901 17:23:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.901 17:23:07 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:51.901 17:23:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.901 17:23:07 -- common/autotest_common.sh@10 -- # set +x 00:30:51.901 INFO: Setting log level to 20 00:30:51.901 INFO: Setting log level to 20 00:30:51.901 INFO: Log level set to 20 00:30:51.901 INFO: Log level set to 20 00:30:51.901 INFO: Requests: 00:30:51.901 { 00:30:51.901 "jsonrpc": "2.0", 00:30:51.901 "method": "framework_start_init", 00:30:51.901 "id": 1 00:30:51.901 } 00:30:51.901 00:30:51.901 INFO: Requests: 00:30:51.901 { 00:30:51.901 "jsonrpc": "2.0", 00:30:51.901 "method": "framework_start_init", 00:30:51.901 "id": 1 00:30:51.901 } 00:30:51.901 00:30:52.159 [2024-07-20 17:23:08.087138] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:52.159 INFO: response: 00:30:52.159 { 00:30:52.159 "jsonrpc": "2.0", 00:30:52.159 "id": 1, 00:30:52.159 "result": true 00:30:52.159 } 00:30:52.159 00:30:52.159 INFO: response: 00:30:52.159 { 00:30:52.159 "jsonrpc": "2.0", 00:30:52.159 "id": 1, 00:30:52.159 "result": true 00:30:52.159 } 00:30:52.159 00:30:52.159 17:23:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.160 17:23:08 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:52.160 17:23:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.160 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:30:52.160 INFO: Setting log level to 40 00:30:52.160 INFO: Setting log level to 40 00:30:52.160 INFO: Setting log level to 40 00:30:52.160 [2024-07-20 17:23:08.097269] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.160 17:23:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:52.160 17:23:08 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:30:52.160 17:23:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:52.160 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:30:52.160 17:23:08 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:30:52.160 17:23:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:52.160 17:23:08 -- common/autotest_common.sh@10 -- # set +x 00:30:55.435 Nvme0n1 00:30:55.435 17:23:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.435 17:23:10 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:55.435 17:23:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.435 17:23:10 -- common/autotest_common.sh@10 -- # set +x 00:30:55.435 17:23:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.435 17:23:10 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:55.435 17:23:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.435 17:23:10 -- common/autotest_common.sh@10 -- # set +x 00:30:55.435 17:23:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.435 17:23:10 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.435 17:23:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.435 17:23:10 -- common/autotest_common.sh@10 -- # set +x 00:30:55.435 [2024-07-20 17:23:10.987865] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.435 17:23:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.435 17:23:10 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:55.435 17:23:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.435 17:23:10 -- common/autotest_common.sh@10 -- # set +x 00:30:55.435 [2024-07-20 17:23:10.995570] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:55.435 [ 00:30:55.435 { 00:30:55.435 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:55.435 "subtype": "Discovery", 00:30:55.435 "listen_addresses": [], 00:30:55.435 "allow_any_host": true, 00:30:55.435 "hosts": [] 00:30:55.435 }, 00:30:55.435 { 00:30:55.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.435 "subtype": "NVMe", 00:30:55.435 "listen_addresses": [ 00:30:55.435 { 00:30:55.435 "transport": "TCP", 00:30:55.435 "trtype": "TCP", 00:30:55.435 "adrfam": "IPv4", 00:30:55.435 "traddr": "10.0.0.2", 00:30:55.435 "trsvcid": "4420" 00:30:55.435 } 00:30:55.435 ], 00:30:55.435 "allow_any_host": true, 00:30:55.435 "hosts": [], 00:30:55.435 "serial_number": "SPDK00000000000001", 00:30:55.435 "model_number": "SPDK bdev Controller", 00:30:55.435 "max_namespaces": 1, 00:30:55.435 "min_cntlid": 1, 00:30:55.435 "max_cntlid": 65519, 00:30:55.435 "namespaces": [ 00:30:55.435 { 00:30:55.435 "nsid": 1, 00:30:55.435 "bdev_name": "Nvme0n1", 00:30:55.435 "name": "Nvme0n1", 00:30:55.435 "nguid": "E7227BDBF2A2421DB6EDA7AB8D03D5C8", 00:30:55.435 "uuid": "e7227bdb-f2a2-421d-b6ed-a7ab8d03d5c8" 00:30:55.435 } 00:30:55.435 ] 00:30:55.435 } 00:30:55.436 ] 00:30:55.436 17:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.436 17:23:11 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:55.436 17:23:11 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:55.436 17:23:11 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:55.436 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.436 17:23:11 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:55.436 17:23:11 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:55.436 17:23:11 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:55.436 17:23:11 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:55.436 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.436 17:23:11 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:55.436 17:23:11 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:55.436 17:23:11 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:55.436 17:23:11 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:55.436 17:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:55.436 17:23:11 -- common/autotest_common.sh@10 -- # set +x 00:30:55.436 17:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:55.436 17:23:11 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:55.436 17:23:11 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:55.436 17:23:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:55.436 17:23:11 -- nvmf/common.sh@116 -- # sync 00:30:55.436 17:23:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:55.436 17:23:11 -- nvmf/common.sh@119 -- # set +e 00:30:55.436 17:23:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:55.436 17:23:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:55.436 rmmod nvme_tcp 00:30:55.436 rmmod nvme_fabrics 00:30:55.436 rmmod nvme_keyring 00:30:55.436 17:23:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:55.436 17:23:11 -- nvmf/common.sh@123 -- # set -e 00:30:55.436 17:23:11 -- nvmf/common.sh@124 -- # return 0 00:30:55.436 17:23:11 -- nvmf/common.sh@477 -- # '[' -n 675364 ']' 00:30:55.436 17:23:11 -- nvmf/common.sh@478 -- # killprocess 675364 00:30:55.436 17:23:11 -- common/autotest_common.sh@926 -- # '[' -z 675364 ']' 00:30:55.436 17:23:11 -- common/autotest_common.sh@930 -- # kill -0 675364 00:30:55.436 17:23:11 -- common/autotest_common.sh@931 -- # uname 00:30:55.436 17:23:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:55.436 17:23:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 675364 00:30:55.436 17:23:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:55.436 17:23:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:55.436 17:23:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 675364' 00:30:55.436 killing process with pid 675364 00:30:55.436 17:23:11 -- common/autotest_common.sh@945 -- # kill 675364 00:30:55.436 [2024-07-20 17:23:11.339291] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:55.436 17:23:11 -- common/autotest_common.sh@950 -- # wait 675364 00:30:56.806 17:23:12 -- nvmf/common.sh@480 -- 
# '[' '' == iso ']' 00:30:56.806 17:23:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:56.806 17:23:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:56.806 17:23:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:56.806 17:23:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:56.806 17:23:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.806 17:23:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:56.806 17:23:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.332 17:23:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:59.332 00:30:59.332 real 0m17.710s 00:30:59.332 user 0m26.117s 00:30:59.332 sys 0m2.227s 00:30:59.332 17:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:59.332 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:30:59.332 ************************************ 00:30:59.332 END TEST nvmf_identify_passthru 00:30:59.332 ************************************ 00:30:59.332 17:23:14 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:59.332 17:23:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:59.332 17:23:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:59.332 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:30:59.332 ************************************ 00:30:59.332 START TEST nvmf_dif 00:30:59.332 ************************************ 00:30:59.332 17:23:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:59.332 * Looking for test storage... 00:30:59.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.332 17:23:14 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.332 17:23:14 -- nvmf/common.sh@7 -- # uname -s 00:30:59.332 17:23:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.332 17:23:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.332 17:23:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.332 17:23:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.332 17:23:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.332 17:23:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.332 17:23:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.332 17:23:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.332 17:23:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.332 17:23:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.332 17:23:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.332 17:23:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:59.332 17:23:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.332 17:23:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.332 17:23:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.332 17:23:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.332 17:23:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.332 17:23:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.332 17:23:14 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.332 17:23:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.332 17:23:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.332 17:23:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.332 17:23:14 -- paths/export.sh@5 -- # export PATH 00:30:59.332 17:23:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.332 17:23:14 -- nvmf/common.sh@46 -- # : 0 00:30:59.332 17:23:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:59.332 17:23:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:59.332 17:23:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:59.332 17:23:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.332 17:23:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.332 17:23:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:59.332 17:23:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:59.332 17:23:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:59.332 17:23:14 -- target/dif.sh@15 -- # NULL_META=16 00:30:59.332 17:23:14 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:59.332 17:23:14 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:59.332 17:23:14 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:59.332 17:23:14 -- target/dif.sh@135 -- # nvmftestinit 00:30:59.332 17:23:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:59.332 17:23:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.332 17:23:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:59.332 17:23:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:59.332 17:23:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:59.332 17:23:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.332 17:23:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:59.332 17:23:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.332 17:23:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:59.332 17:23:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
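The NIC discovery that follows (gather_supported_nvmf_pci_devs) keys off sysfs: each supported PCI function publishes its kernel interface name under /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of that lookup, assuming the two E810 functions (0000:0a:00.0/1) present on this host:

    # Map a PCI function to its net device the same way pci_net_devs does below.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done

On this machine the loop prints cvl_0_0 and cvl_0_1, the two ice-driver interfaces all of the TCP tests are wired through.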
00:30:59.332 17:23:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:59.332 17:23:15 -- common/autotest_common.sh@10 -- # set +x 00:31:01.232 17:23:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:01.232 17:23:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:01.232 17:23:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:01.232 17:23:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:01.232 17:23:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:01.232 17:23:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:01.232 17:23:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:01.232 17:23:16 -- nvmf/common.sh@294 -- # net_devs=() 00:31:01.233 17:23:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:01.233 17:23:16 -- nvmf/common.sh@295 -- # e810=() 00:31:01.233 17:23:16 -- nvmf/common.sh@295 -- # local -ga e810 00:31:01.233 17:23:16 -- nvmf/common.sh@296 -- # x722=() 00:31:01.233 17:23:16 -- nvmf/common.sh@296 -- # local -ga x722 00:31:01.233 17:23:16 -- nvmf/common.sh@297 -- # mlx=() 00:31:01.233 17:23:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:01.233 17:23:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:01.233 17:23:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:01.233 17:23:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:01.233 17:23:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:01.233 17:23:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:01.233 17:23:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:01.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:01.233 17:23:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:01.233 17:23:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:01.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:01.233 17:23:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:31:01.233 17:23:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:01.233 17:23:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:01.233 17:23:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.233 17:23:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:01.233 17:23:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.233 17:23:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:01.233 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:01.233 17:23:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.233 17:23:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:01.233 17:23:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:01.233 17:23:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:01.233 17:23:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:01.233 17:23:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:01.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:01.233 17:23:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:01.233 17:23:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:01.233 17:23:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:01.233 17:23:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:01.233 17:23:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:01.233 17:23:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:01.233 17:23:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:01.233 17:23:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:01.233 17:23:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:01.233 17:23:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:01.233 17:23:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:01.233 17:23:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:01.233 17:23:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:01.233 17:23:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:01.233 17:23:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:01.233 17:23:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:01.233 17:23:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:01.233 17:23:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:01.233 17:23:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:01.233 17:23:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:01.233 17:23:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:01.233 17:23:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:01.233 17:23:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:01.233 17:23:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:01.233 17:23:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:01.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:31:01.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:31:01.233 00:31:01.233 --- 10.0.0.2 ping statistics --- 00:31:01.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.233 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:31:01.233 17:23:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:01.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:01.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:31:01.233 00:31:01.233 --- 10.0.0.1 ping statistics --- 00:31:01.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:01.233 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:31:01.233 17:23:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:01.233 17:23:17 -- nvmf/common.sh@410 -- # return 0 00:31:01.233 17:23:17 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:01.233 17:23:17 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:02.182 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:02.182 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:02.182 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:02.182 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:02.182 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:02.182 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:02.182 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:02.182 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:02.182 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:02.182 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:02.182 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:02.182 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:02.182 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:02.182 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:02.182 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:02.182 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:02.182 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:02.440 17:23:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:02.440 17:23:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:02.440 17:23:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:02.440 17:23:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:02.440 17:23:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:02.440 17:23:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:02.440 17:23:18 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:02.440 17:23:18 -- target/dif.sh@137 -- # nvmfappstart 00:31:02.440 17:23:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:02.440 17:23:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:02.440 17:23:18 -- common/autotest_common.sh@10 -- # set +x 00:31:02.440 17:23:18 -- nvmf/common.sh@469 -- # nvmfpid=678672 00:31:02.440 17:23:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:02.440 17:23:18 -- nvmf/common.sh@470 -- # waitforlisten 678672 00:31:02.440 17:23:18 -- common/autotest_common.sh@819 -- # '[' -z 678672 ']' 00:31:02.440 17:23:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 
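nvmfappstart above boils down to launching nvmf_tgt inside the freshly built namespace and blocking until its RPC socket appears. A minimal sketch (the -i 0 -e 0xFFFF flags and the socket path are the ones from this log; the polling loop is a simplification of waitforlisten, not the harness function itself):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Poll for the UNIX-domain RPC socket that rpc_cmd/rpc.py will talk to.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    echo "nvmf_tgt ($nvmfpid) is up; RPC available at /var/tmp/spdk.sock"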
00:31:02.440 17:23:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:02.440 17:23:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:02.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:02.440 17:23:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:02.440 17:23:18 -- common/autotest_common.sh@10 -- # set +x 00:31:02.440 [2024-07-20 17:23:18.409157] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:02.440 [2024-07-20 17:23:18.409229] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:02.440 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.440 [2024-07-20 17:23:18.476490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.440 [2024-07-20 17:23:18.559325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:02.440 [2024-07-20 17:23:18.559468] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:02.440 [2024-07-20 17:23:18.559484] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:02.440 [2024-07-20 17:23:18.559497] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:02.440 [2024-07-20 17:23:18.559525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.372 17:23:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:03.372 17:23:19 -- common/autotest_common.sh@852 -- # return 0 00:31:03.372 17:23:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:03.372 17:23:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:03.372 17:23:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.372 17:23:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.372 17:23:19 -- target/dif.sh@139 -- # create_transport 00:31:03.372 17:23:19 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:03.372 17:23:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.372 17:23:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.372 [2024-07-20 17:23:19.382871] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.372 17:23:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.372 17:23:19 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:03.372 17:23:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:03.372 17:23:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:03.372 17:23:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.372 ************************************ 00:31:03.372 START TEST fio_dif_1_default 00:31:03.372 ************************************ 00:31:03.372 17:23:19 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:31:03.372 17:23:19 -- target/dif.sh@86 -- # create_subsystems 0 00:31:03.372 17:23:19 -- target/dif.sh@28 -- # local sub 00:31:03.372 17:23:19 -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.372 17:23:19 -- target/dif.sh@31 -- # create_subsystem 0 00:31:03.372 17:23:19 -- target/dif.sh@18 -- # local sub_id=0 00:31:03.372 17:23:19 -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:03.372 17:23:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.372 17:23:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.372 bdev_null0 00:31:03.372 17:23:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.372 17:23:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:03.372 17:23:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.372 17:23:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.372 17:23:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.372 17:23:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:03.372 17:23:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.372 17:23:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.372 17:23:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.372 17:23:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:03.372 17:23:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:03.372 17:23:19 -- common/autotest_common.sh@10 -- # set +x 00:31:03.372 [2024-07-20 17:23:19.419136] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.372 17:23:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:03.372 17:23:19 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:03.372 17:23:19 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:03.372 17:23:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:03.372 17:23:19 -- nvmf/common.sh@520 -- # config=() 00:31:03.372 17:23:19 -- nvmf/common.sh@520 -- # local subsystem config 00:31:03.372 17:23:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:03.372 17:23:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.372 17:23:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:03.372 { 00:31:03.372 "params": { 00:31:03.372 "name": "Nvme$subsystem", 00:31:03.372 "trtype": "$TEST_TRANSPORT", 00:31:03.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.372 "adrfam": "ipv4", 00:31:03.372 "trsvcid": "$NVMF_PORT", 00:31:03.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.372 "hdgst": ${hdgst:-false}, 00:31:03.372 "ddgst": ${ddgst:-false} 00:31:03.372 }, 00:31:03.372 "method": "bdev_nvme_attach_controller" 00:31:03.372 } 00:31:03.372 EOF 00:31:03.372 )") 00:31:03.372 17:23:19 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.372 17:23:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:03.372 17:23:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.372 17:23:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:03.372 17:23:19 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.372 17:23:19 -- target/dif.sh@82 -- # gen_fio_conf 00:31:03.372 17:23:19 -- common/autotest_common.sh@1320 -- # shift 00:31:03.372 17:23:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:03.372 17:23:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 
00:31:03.372 17:23:19 -- target/dif.sh@54 -- # local file 00:31:03.372 17:23:19 -- target/dif.sh@56 -- # cat 00:31:03.372 17:23:19 -- nvmf/common.sh@542 -- # cat 00:31:03.372 17:23:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.372 17:23:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:03.372 17:23:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:03.372 17:23:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:03.372 17:23:19 -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.372 17:23:19 -- nvmf/common.sh@544 -- # jq . 00:31:03.372 17:23:19 -- nvmf/common.sh@545 -- # IFS=, 00:31:03.372 17:23:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:03.372 "params": { 00:31:03.372 "name": "Nvme0", 00:31:03.372 "trtype": "tcp", 00:31:03.372 "traddr": "10.0.0.2", 00:31:03.372 "adrfam": "ipv4", 00:31:03.372 "trsvcid": "4420", 00:31:03.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.372 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.372 "hdgst": false, 00:31:03.372 "ddgst": false 00:31:03.372 }, 00:31:03.372 "method": "bdev_nvme_attach_controller" 00:31:03.372 }' 00:31:03.372 17:23:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:03.372 17:23:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:03.372 17:23:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.372 17:23:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.372 17:23:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:03.372 17:23:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:03.372 17:23:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:03.372 17:23:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:03.372 17:23:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:03.372 17:23:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.630 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:03.630 fio-3.35 00:31:03.630 Starting 1 thread 00:31:03.630 EAL: No free 2048 kB hugepages reported on node 1 00:31:04.195 [2024-07-20 17:23:20.133942] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:04.195 [2024-07-20 17:23:20.134013] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:14.174 00:31:14.174 filename0: (groupid=0, jobs=1): err= 0: pid=678915: Sat Jul 20 17:23:30 2024 00:31:14.174 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10001msec) 00:31:14.174 slat (nsec): min=6600, max=70293, avg=8646.32, stdev=4225.75 00:31:14.174 clat (usec): min=41831, max=44861, avg=41993.40, stdev=210.21 00:31:14.174 lat (usec): min=41839, max=44880, avg=42002.05, stdev=210.41 00:31:14.174 clat percentiles (usec): 00:31:14.174 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:14.174 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:14.174 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:14.174 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:31:14.174 | 99.99th=[44827] 00:31:14.174 bw ( KiB/s): min= 352, max= 384, per=99.80%, avg=380.63, stdev=10.09, samples=19 00:31:14.174 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:31:14.174 lat (msec) : 50=100.00% 00:31:14.174 cpu : usr=90.67%, sys=9.05%, ctx=23, majf=0, minf=260 00:31:14.174 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.174 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.174 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:14.174 00:31:14.174 Run status group 0 (all jobs): 00:31:14.174 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10001-10001msec 00:31:14.431 17:23:30 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:14.431 17:23:30 -- target/dif.sh@43 -- # local sub 00:31:14.431 17:23:30 -- target/dif.sh@45 -- # for sub in "$@" 00:31:14.431 17:23:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:14.431 17:23:30 -- target/dif.sh@36 -- # local sub_id=0 00:31:14.431 17:23:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.431 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.431 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.431 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.431 17:23:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:14.431 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.431 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.431 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.431 00:31:14.431 real 0m11.139s 00:31:14.431 user 0m10.106s 00:31:14.431 sys 0m1.209s 00:31:14.431 17:23:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:14.431 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.431 ************************************ 00:31:14.431 END TEST fio_dif_1_default 00:31:14.431 ************************************ 00:31:14.431 17:23:30 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:14.431 17:23:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:14.431 17:23:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:14.431 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.431 ************************************ 00:31:14.431 START TEST fio_dif_1_multi_subsystems 00:31:14.431 
************************************ 00:31:14.431 17:23:30 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:31:14.431 17:23:30 -- target/dif.sh@92 -- # local files=1 00:31:14.431 17:23:30 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:14.431 17:23:30 -- target/dif.sh@28 -- # local sub 00:31:14.431 17:23:30 -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.431 17:23:30 -- target/dif.sh@31 -- # create_subsystem 0 00:31:14.431 17:23:30 -- target/dif.sh@18 -- # local sub_id=0 00:31:14.431 17:23:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:14.431 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.431 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.431 bdev_null0 00:31:14.431 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.431 17:23:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:14.431 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.431 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.431 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.431 17:23:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:14.431 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.431 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.431 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.431 17:23:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.431 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.431 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.431 [2024-07-20 17:23:30.581687] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.431 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.431 17:23:30 -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.431 17:23:30 -- target/dif.sh@31 -- # create_subsystem 1 00:31:14.431 17:23:30 -- target/dif.sh@18 -- # local sub_id=1 00:31:14.431 17:23:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:14.431 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.431 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.689 bdev_null1 00:31:14.689 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.689 17:23:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:14.689 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.689 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.689 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.689 17:23:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:14.689 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.689 17:23:30 -- common/autotest_common.sh@10 -- # set +x 00:31:14.689 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.689 17:23:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:14.689 17:23:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.689 17:23:30 -- common/autotest_common.sh@10 -- # set +x 
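For reference, the per-subsystem setup that rpc_cmd drives above maps one-to-one onto SPDK's rpc.py script. A hypothetical standalone reproduction, with every argument copied from the trace (a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 1 protection, exported over NVMe/TCP on 10.0.0.2:4420):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

The multi-subsystem test then repeats the same four calls with the 0 replaced by 1, which is exactly what the cnode1/bdev_null1 lines above show.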
00:31:14.689 17:23:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.689 17:23:30 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:14.689 17:23:30 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:14.689 17:23:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:14.689 17:23:30 -- nvmf/common.sh@520 -- # config=() 00:31:14.689 17:23:30 -- nvmf/common.sh@520 -- # local subsystem config 00:31:14.689 17:23:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:14.689 17:23:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.689 17:23:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:14.689 { 00:31:14.689 "params": { 00:31:14.689 "name": "Nvme$subsystem", 00:31:14.689 "trtype": "$TEST_TRANSPORT", 00:31:14.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.689 "adrfam": "ipv4", 00:31:14.689 "trsvcid": "$NVMF_PORT", 00:31:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.689 "hdgst": ${hdgst:-false}, 00:31:14.689 "ddgst": ${ddgst:-false} 00:31:14.689 }, 00:31:14.689 "method": "bdev_nvme_attach_controller" 00:31:14.689 } 00:31:14.689 EOF 00:31:14.689 )") 00:31:14.689 17:23:30 -- target/dif.sh@82 -- # gen_fio_conf 00:31:14.689 17:23:30 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.689 17:23:30 -- target/dif.sh@54 -- # local file 00:31:14.689 17:23:30 -- target/dif.sh@56 -- # cat 00:31:14.689 17:23:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:14.689 17:23:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.689 17:23:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:14.689 17:23:30 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.689 17:23:30 -- common/autotest_common.sh@1320 -- # shift 00:31:14.689 17:23:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:14.689 17:23:30 -- nvmf/common.sh@542 -- # cat 00:31:14.689 17:23:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.689 17:23:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:14.689 17:23:30 -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.689 17:23:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.689 17:23:30 -- target/dif.sh@73 -- # cat 00:31:14.689 17:23:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:14.689 17:23:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:14.689 17:23:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:14.689 17:23:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:14.689 { 00:31:14.689 "params": { 00:31:14.689 "name": "Nvme$subsystem", 00:31:14.689 "trtype": "$TEST_TRANSPORT", 00:31:14.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.689 "adrfam": "ipv4", 00:31:14.689 "trsvcid": "$NVMF_PORT", 00:31:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.689 "hdgst": ${hdgst:-false}, 00:31:14.689 "ddgst": ${ddgst:-false} 00:31:14.689 }, 00:31:14.689 "method": "bdev_nvme_attach_controller" 00:31:14.689 } 00:31:14.689 EOF 00:31:14.689 )") 00:31:14.689 17:23:30 -- nvmf/common.sh@542 -- # cat 00:31:14.689 
17:23:30 -- target/dif.sh@72 -- # (( file++ )) 00:31:14.689 17:23:30 -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.689 17:23:30 -- nvmf/common.sh@544 -- # jq . 00:31:14.689 17:23:30 -- nvmf/common.sh@545 -- # IFS=, 00:31:14.689 17:23:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:14.689 "params": { 00:31:14.689 "name": "Nvme0", 00:31:14.689 "trtype": "tcp", 00:31:14.689 "traddr": "10.0.0.2", 00:31:14.689 "adrfam": "ipv4", 00:31:14.689 "trsvcid": "4420", 00:31:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.689 "hdgst": false, 00:31:14.689 "ddgst": false 00:31:14.689 }, 00:31:14.689 "method": "bdev_nvme_attach_controller" 00:31:14.689 },{ 00:31:14.689 "params": { 00:31:14.689 "name": "Nvme1", 00:31:14.689 "trtype": "tcp", 00:31:14.689 "traddr": "10.0.0.2", 00:31:14.689 "adrfam": "ipv4", 00:31:14.689 "trsvcid": "4420", 00:31:14.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:14.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:14.689 "hdgst": false, 00:31:14.689 "ddgst": false 00:31:14.689 }, 00:31:14.689 "method": "bdev_nvme_attach_controller" 00:31:14.690 }' 00:31:14.690 17:23:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:14.690 17:23:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:14.690 17:23:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.690 17:23:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.690 17:23:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:14.690 17:23:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:14.690 17:23:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:14.690 17:23:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:14.690 17:23:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:14.690 17:23:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.954 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:14.954 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:14.954 fio-3.35 00:31:14.954 Starting 2 threads 00:31:14.954 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.214 [2024-07-20 17:23:31.366875] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
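The two /dev/fd paths on the fio command line above are how the harness avoids temporary files: the JSON bdev configuration arrives on descriptor 62 and the generated fio job file (one [filenameN] section per subsystem, built by the (( file = 1 )) .. (( file++ )) loop in the xtrace) on descriptor 61. A plausible reconstruction of the wiring with bash process substitution, assuming create_json_sub_conf and gen_fio_conf are the generator functions named in the trace:

# Feed both generated configs to fio on fds 62 and 61, no temp files.
LD_PRELOAD=" $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    62< <(create_json_sub_conf 0 1) \
    61< <(gen_fio_conf)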
00:31:15.214 [2024-07-20 17:23:31.366946] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:27.413 00:31:27.413 filename0: (groupid=0, jobs=1): err= 0: pid=680356: Sat Jul 20 17:23:41 2024 00:31:27.413 read: IOPS=183, BW=735KiB/s (752kB/s)(7360KiB/10017msec) 00:31:27.413 slat (nsec): min=7363, max=24727, avg=9320.63, stdev=2449.43 00:31:27.413 clat (usec): min=1135, max=43160, avg=21747.23, stdev=20263.77 00:31:27.413 lat (usec): min=1143, max=43172, avg=21756.55, stdev=20263.74 00:31:27.413 clat percentiles (usec): 00:31:27.413 | 1.00th=[ 1156], 5.00th=[ 1205], 10.00th=[ 1221], 20.00th=[ 1270], 00:31:27.413 | 30.00th=[ 1336], 40.00th=[ 1369], 50.00th=[41157], 60.00th=[41681], 00:31:27.413 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:31:27.413 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:27.413 | 99.99th=[43254] 00:31:27.413 bw ( KiB/s): min= 672, max= 768, per=50.50%, avg=734.40, stdev=33.60, samples=20 00:31:27.413 iops : min= 168, max= 192, avg=183.60, stdev= 8.40, samples=20 00:31:27.413 lat (msec) : 2=49.57%, 50=50.43% 00:31:27.413 cpu : usr=94.09%, sys=5.61%, ctx=21, majf=0, minf=139 00:31:27.413 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.413 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.413 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:27.413 filename1: (groupid=0, jobs=1): err= 0: pid=680357: Sat Jul 20 17:23:41 2024 00:31:27.413 read: IOPS=179, BW=720KiB/s (737kB/s)(7200KiB/10004msec) 00:31:27.413 slat (nsec): min=7279, max=46725, avg=10808.02, stdev=6083.56 00:31:27.413 clat (usec): min=1353, max=43926, avg=22192.27, stdev=20425.04 00:31:27.413 lat (usec): min=1361, max=43944, avg=22203.08, stdev=20424.85 00:31:27.413 clat percentiles (usec): 00:31:27.413 | 1.00th=[ 1385], 5.00th=[ 1434], 10.00th=[ 1450], 20.00th=[ 1467], 00:31:27.413 | 30.00th=[ 1500], 40.00th=[ 1532], 50.00th=[41681], 60.00th=[42206], 00:31:27.413 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:31:27.413 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:31:27.413 | 99.99th=[43779] 00:31:27.413 bw ( KiB/s): min= 640, max= 768, per=49.40%, avg=718.40, stdev=36.67, samples=20 00:31:27.413 iops : min= 160, max= 192, avg=179.60, stdev= 9.17, samples=20 00:31:27.413 lat (msec) : 2=49.22%, 4=0.11%, 50=50.67% 00:31:27.413 cpu : usr=95.15%, sys=4.55%, ctx=12, majf=0, minf=175 00:31:27.413 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:27.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.413 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.413 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:27.413 00:31:27.413 Run status group 0 (all jobs): 00:31:27.413 READ: bw=1454KiB/s (1488kB/s), 720KiB/s-735KiB/s (737kB/s-752kB/s), io=14.2MiB (14.9MB), run=10004-10017msec 00:31:27.413 17:23:41 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:27.413 17:23:41 -- target/dif.sh@43 -- # local sub 00:31:27.413 17:23:41 -- target/dif.sh@45 -- # for sub in "$@" 00:31:27.413 17:23:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:27.413 17:23:41 
-- target/dif.sh@36 -- # local sub_id=0 00:31:27.413 17:23:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:27.413 17:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 17:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.413 17:23:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:27.413 17:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 17:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.413 17:23:41 -- target/dif.sh@45 -- # for sub in "$@" 00:31:27.413 17:23:41 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:27.413 17:23:41 -- target/dif.sh@36 -- # local sub_id=1 00:31:27.413 17:23:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.413 17:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 17:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.413 17:23:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:27.413 17:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 17:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.413 00:31:27.413 real 0m11.159s 00:31:27.413 user 0m20.031s 00:31:27.413 sys 0m1.303s 00:31:27.413 17:23:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 ************************************ 00:31:27.413 END TEST fio_dif_1_multi_subsystems 00:31:27.413 ************************************ 00:31:27.413 17:23:41 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:27.413 17:23:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:27.413 17:23:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 ************************************ 00:31:27.413 START TEST fio_dif_rand_params 00:31:27.413 ************************************ 00:31:27.413 17:23:41 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:31:27.413 17:23:41 -- target/dif.sh@100 -- # local NULL_DIF 00:31:27.413 17:23:41 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:27.413 17:23:41 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:27.413 17:23:41 -- target/dif.sh@103 -- # bs=128k 00:31:27.413 17:23:41 -- target/dif.sh@103 -- # numjobs=3 00:31:27.413 17:23:41 -- target/dif.sh@103 -- # iodepth=3 00:31:27.413 17:23:41 -- target/dif.sh@103 -- # runtime=5 00:31:27.413 17:23:41 -- target/dif.sh@105 -- # create_subsystems 0 00:31:27.413 17:23:41 -- target/dif.sh@28 -- # local sub 00:31:27.413 17:23:41 -- target/dif.sh@30 -- # for sub in "$@" 00:31:27.413 17:23:41 -- target/dif.sh@31 -- # create_subsystem 0 00:31:27.413 17:23:41 -- target/dif.sh@18 -- # local sub_id=0 00:31:27.413 17:23:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:27.413 17:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 bdev_null0 00:31:27.413 17:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.413 17:23:41 -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:27.413 17:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 17:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.413 17:23:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:27.413 17:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 17:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.413 17:23:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:27.413 17:23:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:27.413 17:23:41 -- common/autotest_common.sh@10 -- # set +x 00:31:27.413 [2024-07-20 17:23:41.764042] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:27.413 17:23:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:27.413 17:23:41 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:27.413 17:23:41 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:27.413 17:23:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:27.413 17:23:41 -- nvmf/common.sh@520 -- # config=() 00:31:27.413 17:23:41 -- nvmf/common.sh@520 -- # local subsystem config 00:31:27.413 17:23:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:27.413 17:23:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.413 17:23:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:27.413 { 00:31:27.413 "params": { 00:31:27.413 "name": "Nvme$subsystem", 00:31:27.413 "trtype": "$TEST_TRANSPORT", 00:31:27.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:27.413 "adrfam": "ipv4", 00:31:27.413 "trsvcid": "$NVMF_PORT", 00:31:27.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:27.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:27.413 "hdgst": ${hdgst:-false}, 00:31:27.413 "ddgst": ${ddgst:-false} 00:31:27.413 }, 00:31:27.413 "method": "bdev_nvme_attach_controller" 00:31:27.413 } 00:31:27.413 EOF 00:31:27.413 )") 00:31:27.413 17:23:41 -- target/dif.sh@82 -- # gen_fio_conf 00:31:27.413 17:23:41 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.413 17:23:41 -- target/dif.sh@54 -- # local file 00:31:27.413 17:23:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:27.413 17:23:41 -- target/dif.sh@56 -- # cat 00:31:27.413 17:23:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:27.413 17:23:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:27.413 17:23:41 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.413 17:23:41 -- common/autotest_common.sh@1320 -- # shift 00:31:27.413 17:23:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:27.413 17:23:41 -- nvmf/common.sh@542 -- # cat 00:31:27.413 17:23:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.413 17:23:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:27.414 17:23:41 -- target/dif.sh@72 -- # (( file <= files )) 00:31:27.414 17:23:41 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.414 17:23:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:27.414 17:23:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:27.414 17:23:41 -- nvmf/common.sh@544 -- # jq . 00:31:27.414 17:23:41 -- nvmf/common.sh@545 -- # IFS=, 00:31:27.414 17:23:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:27.414 "params": { 00:31:27.414 "name": "Nvme0", 00:31:27.414 "trtype": "tcp", 00:31:27.414 "traddr": "10.0.0.2", 00:31:27.414 "adrfam": "ipv4", 00:31:27.414 "trsvcid": "4420", 00:31:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:27.414 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:27.414 "hdgst": false, 00:31:27.414 "ddgst": false 00:31:27.414 }, 00:31:27.414 "method": "bdev_nvme_attach_controller" 00:31:27.414 }' 00:31:27.414 17:23:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:27.414 17:23:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:27.414 17:23:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:27.414 17:23:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:27.414 17:23:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:27.414 17:23:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:27.414 17:23:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:27.414 17:23:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:27.414 17:23:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:27.414 17:23:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:27.414 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:27.414 ... 00:31:27.414 fio-3.35 00:31:27.414 Starting 3 threads 00:31:27.414 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.414 [2024-07-20 17:23:42.430731] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
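fio_dif_rand_params starts with a 5-second random-read pass against a DIF type 3 null bdev using 128 KiB blocks, three jobs and a queue depth of 3 (the NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5 assignments above). Reconstructed from those parameters and fio's own banner, the effective job file is roughly the following; section names and exact option spellings are an approximation rather than a dump of the real file, and filename0 is assumed to be the Nvme0n1 namespace bdev exposed by the attached controller:

[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1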
00:31:27.414 [2024-07-20 17:23:42.430825] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:31.610 00:31:31.610 filename0: (groupid=0, jobs=1): err= 0: pid=681787: Sat Jul 20 17:23:47 2024 00:31:31.610 read: IOPS=177, BW=22.1MiB/s (23.2MB/s)(111MiB/5006msec) 00:31:31.610 slat (nsec): min=6993, max=73639, avg=12808.81, stdev=4511.66 00:31:31.610 clat (usec): min=7799, max=94877, avg=16909.65, stdev=11796.88 00:31:31.610 lat (usec): min=7811, max=94888, avg=16922.46, stdev=11796.84 00:31:31.610 clat percentiles (usec): 00:31:31.610 | 1.00th=[ 8029], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10290], 00:31:31.610 | 30.00th=[11863], 40.00th=[14091], 50.00th=[14877], 60.00th=[15533], 00:31:31.610 | 70.00th=[16188], 80.00th=[16909], 90.00th=[18220], 95.00th=[54789], 00:31:31.610 | 99.00th=[57410], 99.50th=[58459], 99.90th=[94897], 99.95th=[94897], 00:31:31.610 | 99.99th=[94897] 00:31:31.610 bw ( KiB/s): min=16929, max=28416, per=33.08%, avg=22633.70, stdev=3868.00, samples=10 00:31:31.610 iops : min= 132, max= 222, avg=176.80, stdev=30.26, samples=10 00:31:31.610 lat (msec) : 10=15.33%, 20=76.78%, 50=0.56%, 100=7.33% 00:31:31.610 cpu : usr=94.01%, sys=5.00%, ctx=13, majf=0, minf=146 00:31:31.610 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.610 issued rwts: total=887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.610 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.610 filename0: (groupid=0, jobs=1): err= 0: pid=681788: Sat Jul 20 17:23:47 2024 00:31:31.610 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5049msec) 00:31:31.610 slat (nsec): min=6976, max=41411, avg=12294.78, stdev=4159.38 00:31:31.610 clat (usec): min=7532, max=57863, avg=13936.03, stdev=11304.09 00:31:31.610 lat (usec): min=7543, max=57875, avg=13948.33, stdev=11304.02 00:31:31.610 clat percentiles (usec): 00:31:31.610 | 1.00th=[ 7963], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:31:31.610 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10814], 60.00th=[11076], 00:31:31.610 | 70.00th=[11469], 80.00th=[11994], 90.00th=[12780], 95.00th=[51643], 00:31:31.610 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56886], 99.95th=[57934], 00:31:31.610 | 99.99th=[57934] 00:31:31.610 bw ( KiB/s): min=16896, max=36864, per=40.37%, avg=27622.40, stdev=6830.35, samples=10 00:31:31.610 iops : min= 132, max= 288, avg=215.80, stdev=53.36, samples=10 00:31:31.610 lat (msec) : 10=24.95%, 20=67.10%, 50=0.28%, 100=7.67% 00:31:31.610 cpu : usr=93.48%, sys=5.53%, ctx=11, majf=0, minf=111 00:31:31.610 IO depths : 1=3.0%, 2=97.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.610 issued rwts: total=1082,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.610 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.610 filename0: (groupid=0, jobs=1): err= 0: pid=681789: Sat Jul 20 17:23:47 2024 00:31:31.610 read: IOPS=145, BW=18.2MiB/s (19.1MB/s)(91.2MiB/5011msec) 00:31:31.610 slat (nsec): min=4133, max=38260, avg=15307.12, stdev=5176.66 00:31:31.610 clat (usec): min=8200, max=96554, avg=20567.12, stdev=15307.84 00:31:31.610 lat (usec): min=8213, max=96574, avg=20582.43, stdev=15308.09 00:31:31.610 clat 
percentiles (usec): 00:31:31.610 | 1.00th=[ 8586], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[13304], 00:31:31.610 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15139], 60.00th=[15664], 00:31:31.610 | 70.00th=[16057], 80.00th=[16909], 90.00th=[54789], 95.00th=[56361], 00:31:31.610 | 99.00th=[58459], 99.50th=[59507], 99.90th=[96994], 99.95th=[96994], 00:31:31.610 | 99.99th=[96994] 00:31:31.610 bw ( KiB/s): min=10240, max=23808, per=27.20%, avg=18611.20, stdev=4234.20, samples=10 00:31:31.610 iops : min= 80, max= 186, avg=145.40, stdev=33.08, samples=10 00:31:31.610 lat (msec) : 10=5.89%, 20=78.77%, 50=0.55%, 100=14.79% 00:31:31.610 cpu : usr=94.39%, sys=4.43%, ctx=79, majf=0, minf=102 00:31:31.610 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.610 issued rwts: total=730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.610 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.610 00:31:31.610 Run status group 0 (all jobs): 00:31:31.610 READ: bw=66.8MiB/s (70.1MB/s), 18.2MiB/s-26.8MiB/s (19.1MB/s-28.1MB/s), io=337MiB (354MB), run=5006-5049msec 00:31:31.868 17:23:47 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:31.868 17:23:47 -- target/dif.sh@43 -- # local sub 00:31:31.868 17:23:47 -- target/dif.sh@45 -- # for sub in "$@" 00:31:31.868 17:23:47 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:31.868 17:23:47 -- target/dif.sh@36 -- # local sub_id=0 00:31:31.868 17:23:47 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:31.868 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.868 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.868 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.868 17:23:47 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:31.868 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.868 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.868 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.868 17:23:47 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:31.868 17:23:47 -- target/dif.sh@109 -- # bs=4k 00:31:31.868 17:23:47 -- target/dif.sh@109 -- # numjobs=8 00:31:31.868 17:23:47 -- target/dif.sh@109 -- # iodepth=16 00:31:31.868 17:23:47 -- target/dif.sh@109 -- # runtime= 00:31:31.868 17:23:47 -- target/dif.sh@109 -- # files=2 00:31:31.868 17:23:47 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:31.868 17:23:47 -- target/dif.sh@28 -- # local sub 00:31:31.868 17:23:47 -- target/dif.sh@30 -- # for sub in "$@" 00:31:31.868 17:23:47 -- target/dif.sh@31 -- # create_subsystem 0 00:31:31.868 17:23:47 -- target/dif.sh@18 -- # local sub_id=0 00:31:31.869 17:23:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 bdev_null0 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 17:23:47 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 [2024-07-20 17:23:47.852652] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@30 -- # for sub in "$@" 00:31:31.869 17:23:47 -- target/dif.sh@31 -- # create_subsystem 1 00:31:31.869 17:23:47 -- target/dif.sh@18 -- # local sub_id=1 00:31:31.869 17:23:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 bdev_null1 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@30 -- # for sub in "$@" 00:31:31.869 17:23:47 -- target/dif.sh@31 -- # create_subsystem 2 00:31:31.869 17:23:47 -- target/dif.sh@18 -- # local sub_id=2 00:31:31.869 17:23:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 bdev_null2 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 
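The multi-file variant of the random-parameter test repeats the same four-step subsystem setup for IDs 0, 1 and 2, this time with --dif-type 2. A sketch of the loop implied by "create_subsystems 0 1 2"; the real dif.sh helper presumably threads the DIF type and addressing through variables such as NULL_DIF, so take the details as illustrative:

create_subsystems() {
    local sub
    for sub in "$@"; do
        rpc_cmd bdev_null_create "bdev_null$sub" 64 512 \
            --md-size 16 --dif-type "$NULL_DIF"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done
}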
00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:31.869 17:23:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:31.869 17:23:47 -- common/autotest_common.sh@10 -- # set +x 00:31:31.869 17:23:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:31.869 17:23:47 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:31.869 17:23:47 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:31.869 17:23:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.869 17:23:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:31.869 17:23:47 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.869 17:23:47 -- nvmf/common.sh@520 -- # config=() 00:31:31.869 17:23:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:31.869 17:23:47 -- target/dif.sh@82 -- # gen_fio_conf 00:31:31.869 17:23:47 -- nvmf/common.sh@520 -- # local subsystem config 00:31:31.869 17:23:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:31.869 17:23:47 -- target/dif.sh@54 -- # local file 00:31:31.869 17:23:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:31.869 17:23:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:31.869 17:23:47 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:31.869 17:23:47 -- target/dif.sh@56 -- # cat 00:31:31.869 17:23:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:31.869 { 00:31:31.869 "params": { 00:31:31.869 "name": "Nvme$subsystem", 00:31:31.869 "trtype": "$TEST_TRANSPORT", 00:31:31.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:31.869 "adrfam": "ipv4", 00:31:31.869 "trsvcid": "$NVMF_PORT", 00:31:31.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:31.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:31.869 "hdgst": ${hdgst:-false}, 00:31:31.869 "ddgst": ${ddgst:-false} 00:31:31.869 }, 00:31:31.869 "method": "bdev_nvme_attach_controller" 00:31:31.869 } 00:31:31.869 EOF 00:31:31.869 )") 00:31:31.869 17:23:47 -- common/autotest_common.sh@1320 -- # shift 00:31:31.869 17:23:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:31.869 17:23:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.869 17:23:47 -- nvmf/common.sh@542 -- # cat 00:31:31.869 17:23:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:31.869 17:23:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:31.869 17:23:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:31.869 17:23:47 -- target/dif.sh@72 -- # (( file <= files )) 00:31:31.869 17:23:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:31.869 17:23:47 -- target/dif.sh@73 -- # cat 00:31:31.869 17:23:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:31.869 17:23:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:31.869 { 00:31:31.869 "params": { 00:31:31.869 "name": "Nvme$subsystem", 00:31:31.869 "trtype": "$TEST_TRANSPORT", 00:31:31.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:31.869 "adrfam": "ipv4", 
00:31:31.869 "trsvcid": "$NVMF_PORT", 00:31:31.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:31.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:31.869 "hdgst": ${hdgst:-false}, 00:31:31.869 "ddgst": ${ddgst:-false} 00:31:31.869 }, 00:31:31.869 "method": "bdev_nvme_attach_controller" 00:31:31.869 } 00:31:31.869 EOF 00:31:31.869 )") 00:31:31.869 17:23:47 -- target/dif.sh@72 -- # (( file++ )) 00:31:31.869 17:23:47 -- target/dif.sh@72 -- # (( file <= files )) 00:31:31.869 17:23:47 -- target/dif.sh@73 -- # cat 00:31:31.869 17:23:47 -- nvmf/common.sh@542 -- # cat 00:31:31.869 17:23:47 -- target/dif.sh@72 -- # (( file++ )) 00:31:31.869 17:23:47 -- target/dif.sh@72 -- # (( file <= files )) 00:31:31.869 17:23:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:31.869 17:23:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:31.869 { 00:31:31.869 "params": { 00:31:31.869 "name": "Nvme$subsystem", 00:31:31.869 "trtype": "$TEST_TRANSPORT", 00:31:31.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:31.869 "adrfam": "ipv4", 00:31:31.869 "trsvcid": "$NVMF_PORT", 00:31:31.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:31.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:31.869 "hdgst": ${hdgst:-false}, 00:31:31.869 "ddgst": ${ddgst:-false} 00:31:31.869 }, 00:31:31.869 "method": "bdev_nvme_attach_controller" 00:31:31.869 } 00:31:31.869 EOF 00:31:31.869 )") 00:31:31.869 17:23:47 -- nvmf/common.sh@542 -- # cat 00:31:31.869 17:23:47 -- nvmf/common.sh@544 -- # jq . 00:31:31.869 17:23:47 -- nvmf/common.sh@545 -- # IFS=, 00:31:31.869 17:23:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:31.869 "params": { 00:31:31.869 "name": "Nvme0", 00:31:31.869 "trtype": "tcp", 00:31:31.869 "traddr": "10.0.0.2", 00:31:31.869 "adrfam": "ipv4", 00:31:31.869 "trsvcid": "4420", 00:31:31.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:31.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:31.869 "hdgst": false, 00:31:31.869 "ddgst": false 00:31:31.869 }, 00:31:31.869 "method": "bdev_nvme_attach_controller" 00:31:31.869 },{ 00:31:31.869 "params": { 00:31:31.869 "name": "Nvme1", 00:31:31.869 "trtype": "tcp", 00:31:31.869 "traddr": "10.0.0.2", 00:31:31.869 "adrfam": "ipv4", 00:31:31.869 "trsvcid": "4420", 00:31:31.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:31.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:31.869 "hdgst": false, 00:31:31.869 "ddgst": false 00:31:31.869 }, 00:31:31.869 "method": "bdev_nvme_attach_controller" 00:31:31.869 },{ 00:31:31.869 "params": { 00:31:31.869 "name": "Nvme2", 00:31:31.869 "trtype": "tcp", 00:31:31.869 "traddr": "10.0.0.2", 00:31:31.869 "adrfam": "ipv4", 00:31:31.869 "trsvcid": "4420", 00:31:31.869 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:31.869 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:31.869 "hdgst": false, 00:31:31.869 "ddgst": false 00:31:31.869 }, 00:31:31.869 "method": "bdev_nvme_attach_controller" 00:31:31.869 }' 00:31:31.869 17:23:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:31.869 17:23:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:31.869 17:23:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:31.869 17:23:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:31.869 17:23:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:31.869 17:23:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:31.869 17:23:47 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:31:31.869 17:23:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:31.869 17:23:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:31.869 17:23:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:32.126 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:32.126 ... 00:31:32.126 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:32.126 ... 00:31:32.126 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:32.126 ... 00:31:32.126 fio-3.35 00:31:32.126 Starting 24 threads 00:31:32.126 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.059 [2024-07-20 17:23:49.048205] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:33.059 [2024-07-20 17:23:49.048276] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:45.275 00:31:45.275 filename0: (groupid=0, jobs=1): err= 0: pid=682676: Sat Jul 20 17:23:59 2024 00:31:45.275 read: IOPS=313, BW=1255KiB/s (1285kB/s)(12.4MiB/10108msec) 00:31:45.275 slat (usec): min=3, max=156, avg=21.84, stdev=10.68 00:31:45.275 clat (msec): min=6, max=416, avg=50.65, stdev=70.64 00:31:45.275 lat (msec): min=6, max=416, avg=50.67, stdev=70.64 00:31:45.275 clat percentiles (msec): 00:31:45.275 | 1.00th=[ 16], 5.00th=[ 20], 10.00th=[ 24], 20.00th=[ 28], 00:31:45.275 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 30], 00:31:45.275 | 70.00th=[ 31], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 253], 00:31:45.275 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 418], 99.95th=[ 418], 00:31:45.275 | 99.99th=[ 418] 00:31:45.275 bw ( KiB/s): min= 128, max= 2400, per=4.47%, avg=1262.40, stdev=974.47, samples=20 00:31:45.275 iops : min= 32, max= 600, avg=315.60, stdev=243.62, samples=20 00:31:45.275 lat (msec) : 10=0.47%, 20=5.42%, 50=85.47%, 100=0.06%, 250=2.84% 00:31:45.275 lat (msec) : 500=5.74% 00:31:45.275 cpu : usr=97.66%, sys=1.58%, ctx=80, majf=0, minf=39 00:31:45.275 IO depths : 1=2.1%, 2=4.9%, 4=16.7%, 8=64.2%, 16=12.0%, 32=0.0%, >=64=0.0% 00:31:45.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 complete : 0=0.0%, 4=93.1%, 8=2.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 issued rwts: total=3172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.275 filename0: (groupid=0, jobs=1): err= 0: pid=682677: Sat Jul 20 17:23:59 2024 00:31:45.275 read: IOPS=305, BW=1224KiB/s (1253kB/s)(12.1MiB/10103msec) 00:31:45.275 slat (usec): min=3, max=928, avg=23.97, stdev=42.52 00:31:45.275 clat (msec): min=5, max=391, avg=52.02, stdev=67.62 00:31:45.275 lat (msec): min=5, max=391, avg=52.05, stdev=67.62 00:31:45.275 clat percentiles (msec): 00:31:45.275 | 1.00th=[ 12], 5.00th=[ 21], 10.00th=[ 27], 20.00th=[ 29], 00:31:45.275 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.275 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 48], 95.00th=[ 253], 00:31:45.275 | 99.00th=[ 300], 99.50th=[ 355], 99.90th=[ 393], 99.95th=[ 393], 00:31:45.275 | 99.99th=[ 393] 00:31:45.275 bw ( KiB/s): min= 208, max= 2372, per=4.35%, avg=1230.20, stdev=921.68, samples=20 00:31:45.275 iops : min= 52, max= 593, 
avg=307.55, stdev=230.42, samples=20 00:31:45.275 lat (msec) : 10=0.71%, 20=4.27%, 50=85.02%, 100=0.42%, 250=4.08% 00:31:45.275 lat (msec) : 500=5.50% 00:31:45.275 cpu : usr=91.73%, sys=3.55%, ctx=217, majf=0, minf=21 00:31:45.275 IO depths : 1=0.3%, 2=0.7%, 4=8.8%, 8=75.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:31:45.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 complete : 0=0.0%, 4=91.0%, 8=6.0%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 issued rwts: total=3091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.275 filename0: (groupid=0, jobs=1): err= 0: pid=682678: Sat Jul 20 17:23:59 2024 00:31:45.275 read: IOPS=281, BW=1126KiB/s (1153kB/s)(11.1MiB/10082msec) 00:31:45.275 slat (usec): min=7, max=907, avg=29.62, stdev=32.19 00:31:45.275 clat (msec): min=14, max=548, avg=56.64, stdev=87.02 00:31:45.275 lat (msec): min=14, max=549, avg=56.67, stdev=87.03 00:31:45.275 clat percentiles (msec): 00:31:45.275 | 1.00th=[ 18], 5.00th=[ 21], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.275 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 32], 60.00th=[ 36], 00:31:45.275 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 45], 95.00th=[ 326], 00:31:45.275 | 99.00th=[ 422], 99.50th=[ 443], 99.90th=[ 523], 99.95th=[ 550], 00:31:45.275 | 99.99th=[ 550] 00:31:45.275 bw ( KiB/s): min= 128, max= 2176, per=3.99%, avg=1128.40, stdev=901.24, samples=20 00:31:45.275 iops : min= 32, max= 544, avg=282.10, stdev=225.31, samples=20 00:31:45.275 lat (msec) : 20=4.05%, 50=87.35%, 100=1.27%, 250=0.78%, 500=6.42% 00:31:45.275 lat (msec) : 750=0.14% 00:31:45.275 cpu : usr=90.72%, sys=4.17%, ctx=564, majf=0, minf=21 00:31:45.275 IO depths : 1=2.9%, 2=6.1%, 4=17.3%, 8=62.9%, 16=10.7%, 32=0.0%, >=64=0.0% 00:31:45.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 complete : 0=0.0%, 4=92.7%, 8=2.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 issued rwts: total=2837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.275 filename0: (groupid=0, jobs=1): err= 0: pid=682679: Sat Jul 20 17:23:59 2024 00:31:45.275 read: IOPS=298, BW=1195KiB/s (1224kB/s)(11.7MiB/10041msec) 00:31:45.275 slat (usec): min=7, max=139, avg=29.76, stdev=22.07 00:31:45.275 clat (msec): min=15, max=415, avg=53.38, stdev=68.53 00:31:45.275 lat (msec): min=15, max=415, avg=53.41, stdev=68.53 00:31:45.275 clat percentiles (msec): 00:31:45.275 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.275 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32], 00:31:45.275 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 52], 95.00th=[ 251], 00:31:45.275 | 99.00th=[ 313], 99.50th=[ 359], 99.90th=[ 414], 99.95th=[ 418], 00:31:45.275 | 99.99th=[ 418] 00:31:45.275 bw ( KiB/s): min= 176, max= 2112, per=4.22%, avg=1194.00, stdev=898.72, samples=20 00:31:45.275 iops : min= 44, max= 528, avg=298.50, stdev=224.68, samples=20 00:31:45.275 lat (msec) : 20=1.87%, 50=88.04%, 100=0.43%, 250=4.33%, 500=5.33% 00:31:45.275 cpu : usr=97.30%, sys=1.70%, ctx=183, majf=0, minf=24 00:31:45.275 IO depths : 1=0.1%, 2=0.5%, 4=10.4%, 8=74.9%, 16=14.1%, 32=0.0%, >=64=0.0% 00:31:45.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 complete : 0=0.0%, 4=91.4%, 8=4.6%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 issued rwts: total=3001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.275 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:31:45.275 filename0: (groupid=0, jobs=1): err= 0: pid=682680: Sat Jul 20 17:23:59 2024 00:31:45.275 read: IOPS=293, BW=1172KiB/s (1201kB/s)(11.5MiB/10078msec) 00:31:45.275 slat (usec): min=3, max=131, avg=23.67, stdev=13.24 00:31:45.275 clat (msec): min=14, max=416, avg=54.25, stdev=72.29 00:31:45.275 lat (msec): min=14, max=416, avg=54.28, stdev=72.28 00:31:45.275 clat percentiles (msec): 00:31:45.275 | 1.00th=[ 19], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.275 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32], 00:31:45.275 | 70.00th=[ 37], 80.00th=[ 40], 90.00th=[ 47], 95.00th=[ 257], 00:31:45.275 | 99.00th=[ 368], 99.50th=[ 372], 99.90th=[ 418], 99.95th=[ 418], 00:31:45.275 | 99.99th=[ 418] 00:31:45.275 bw ( KiB/s): min= 128, max= 2176, per=4.16%, avg=1175.20, stdev=893.27, samples=20 00:31:45.275 iops : min= 32, max= 544, avg=293.80, stdev=223.32, samples=20 00:31:45.275 lat (msec) : 20=2.64%, 50=87.61%, 100=0.54%, 250=2.03%, 500=7.18% 00:31:45.275 cpu : usr=98.15%, sys=1.36%, ctx=58, majf=0, minf=26 00:31:45.275 IO depths : 1=1.0%, 2=2.5%, 4=14.0%, 8=69.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:31:45.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 complete : 0=0.0%, 4=92.3%, 8=3.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 issued rwts: total=2954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.275 filename0: (groupid=0, jobs=1): err= 0: pid=682681: Sat Jul 20 17:23:59 2024 00:31:45.275 read: IOPS=288, BW=1153KiB/s (1180kB/s)(11.3MiB/10077msec) 00:31:45.275 slat (usec): min=3, max=213, avg=39.92, stdev=33.93 00:31:45.275 clat (msec): min=16, max=495, avg=55.30, stdev=86.06 00:31:45.275 lat (msec): min=16, max=495, avg=55.34, stdev=86.05 00:31:45.275 clat percentiles (msec): 00:31:45.275 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.275 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.275 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 47], 95.00th=[ 334], 00:31:45.275 | 99.00th=[ 422], 99.50th=[ 430], 99.90th=[ 493], 99.95th=[ 498], 00:31:45.275 | 99.99th=[ 498] 00:31:45.275 bw ( KiB/s): min= 128, max= 2144, per=4.09%, avg=1155.35, stdev=919.02, samples=20 00:31:45.275 iops : min= 32, max= 536, avg=288.80, stdev=229.72, samples=20 00:31:45.275 lat (msec) : 20=1.65%, 50=89.74%, 100=0.90%, 250=1.17%, 500=6.54% 00:31:45.275 cpu : usr=95.73%, sys=2.05%, ctx=112, majf=0, minf=29 00:31:45.275 IO depths : 1=0.7%, 2=1.6%, 4=13.4%, 8=71.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:31:45.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 complete : 0=0.0%, 4=92.3%, 8=3.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 issued rwts: total=2904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.275 filename0: (groupid=0, jobs=1): err= 0: pid=682682: Sat Jul 20 17:23:59 2024 00:31:45.275 read: IOPS=288, BW=1153KiB/s (1181kB/s)(11.3MiB/10074msec) 00:31:45.275 slat (usec): min=3, max=640, avg=26.01, stdev=25.65 00:31:45.275 clat (msec): min=17, max=503, avg=55.37, stdev=84.72 00:31:45.275 lat (msec): min=17, max=503, avg=55.39, stdev=84.72 00:31:45.275 clat percentiles (msec): 00:31:45.275 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.275 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.275 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 45], 95.00th=[ 321], 00:31:45.275 | 99.00th=[ 
422], 99.50th=[ 443], 99.90th=[ 477], 99.95th=[ 506], 00:31:45.275 | 99.99th=[ 506] 00:31:45.275 bw ( KiB/s): min= 128, max= 2224, per=4.09%, avg=1155.20, stdev=914.34, samples=20 00:31:45.275 iops : min= 32, max= 556, avg=288.80, stdev=228.59, samples=20 00:31:45.275 lat (msec) : 20=1.07%, 50=89.77%, 100=1.45%, 250=1.24%, 500=6.40% 00:31:45.275 lat (msec) : 750=0.07% 00:31:45.275 cpu : usr=92.18%, sys=3.60%, ctx=241, majf=0, minf=21 00:31:45.275 IO depths : 1=0.3%, 2=1.0%, 4=9.9%, 8=74.0%, 16=14.8%, 32=0.0%, >=64=0.0% 00:31:45.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 complete : 0=0.0%, 4=91.5%, 8=5.4%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 issued rwts: total=2904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.275 filename0: (groupid=0, jobs=1): err= 0: pid=682683: Sat Jul 20 17:23:59 2024 00:31:45.275 read: IOPS=280, BW=1123KiB/s (1150kB/s)(11.0MiB/10028msec) 00:31:45.275 slat (usec): min=5, max=173, avg=58.13, stdev=41.89 00:31:45.275 clat (msec): min=15, max=481, avg=56.69, stdev=79.14 00:31:45.275 lat (msec): min=15, max=481, avg=56.75, stdev=79.14 00:31:45.275 clat percentiles (msec): 00:31:45.275 | 1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.275 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 35], 00:31:45.275 | 70.00th=[ 38], 80.00th=[ 41], 90.00th=[ 54], 95.00th=[ 275], 00:31:45.275 | 99.00th=[ 380], 99.50th=[ 418], 99.90th=[ 426], 99.95th=[ 481], 00:31:45.275 | 99.99th=[ 481] 00:31:45.275 bw ( KiB/s): min= 128, max= 2144, per=3.96%, avg=1120.00, stdev=868.96, samples=20 00:31:45.275 iops : min= 32, max= 536, avg=280.00, stdev=217.24, samples=20 00:31:45.275 lat (msec) : 20=0.96%, 50=88.25%, 100=2.06%, 250=1.28%, 500=7.46% 00:31:45.275 cpu : usr=96.80%, sys=1.52%, ctx=67, majf=0, minf=33 00:31:45.275 IO depths : 1=0.1%, 2=0.6%, 4=10.8%, 8=74.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:31:45.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 complete : 0=0.0%, 4=91.6%, 8=4.7%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.275 issued rwts: total=2816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.275 filename1: (groupid=0, jobs=1): err= 0: pid=682684: Sat Jul 20 17:23:59 2024 00:31:45.275 read: IOPS=311, BW=1248KiB/s (1278kB/s)(12.3MiB/10106msec) 00:31:45.275 slat (usec): min=3, max=144, avg=21.42, stdev=15.17 00:31:45.275 clat (msec): min=6, max=383, avg=50.95, stdev=67.18 00:31:45.275 lat (msec): min=6, max=383, avg=50.97, stdev=67.17 00:31:45.275 clat percentiles (msec): 00:31:45.275 | 1.00th=[ 8], 5.00th=[ 19], 10.00th=[ 27], 20.00th=[ 28], 00:31:45.275 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.275 | 70.00th=[ 32], 80.00th=[ 39], 90.00th=[ 48], 95.00th=[ 251], 00:31:45.275 | 99.00th=[ 292], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 384], 00:31:45.275 | 99.99th=[ 384] 00:31:45.275 bw ( KiB/s): min= 176, max= 2304, per=4.44%, avg=1254.40, stdev=947.82, samples=20 00:31:45.275 iops : min= 44, max= 576, avg=313.60, stdev=236.95, samples=20 00:31:45.276 lat (msec) : 10=1.02%, 20=5.30%, 50=83.91%, 100=0.44%, 250=3.74% 00:31:45.276 lat (msec) : 500=5.58% 00:31:45.276 cpu : usr=98.46%, sys=1.00%, ctx=16, majf=0, minf=51 00:31:45.276 IO depths : 1=3.6%, 2=7.8%, 4=20.4%, 8=58.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 
complete : 0=0.0%, 4=94.0%, 8=0.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename1: (groupid=0, jobs=1): err= 0: pid=682685: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=314, BW=1257KiB/s (1287kB/s)(12.4MiB/10081msec) 00:31:45.276 slat (usec): min=7, max=134, avg=22.81, stdev=13.23 00:31:45.276 clat (msec): min=15, max=405, avg=50.79, stdev=65.52 00:31:45.276 lat (msec): min=15, max=405, avg=50.81, stdev=65.52 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 30], 00:31:45.276 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 47], 95.00th=[ 249], 00:31:45.276 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 405], 00:31:45.276 | 99.99th=[ 405] 00:31:45.276 bw ( KiB/s): min= 256, max= 2240, per=4.46%, avg=1260.40, stdev=945.68, samples=20 00:31:45.276 iops : min= 64, max= 560, avg=315.10, stdev=236.42, samples=20 00:31:45.276 lat (msec) : 20=2.21%, 50=88.00%, 100=0.19%, 250=4.67%, 500=4.93% 00:31:45.276 cpu : usr=98.37%, sys=1.15%, ctx=28, majf=0, minf=24 00:31:45.276 IO depths : 1=0.9%, 2=2.7%, 4=16.1%, 8=67.4%, 16=12.9%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=93.1%, 8=2.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=3167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename1: (groupid=0, jobs=1): err= 0: pid=682686: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=289, BW=1158KiB/s (1186kB/s)(11.4MiB/10080msec) 00:31:45.276 slat (usec): min=3, max=141, avg=28.37, stdev=23.46 00:31:45.276 clat (msec): min=16, max=462, avg=55.08, stdev=85.96 00:31:45.276 lat (msec): min=16, max=462, avg=55.11, stdev=85.96 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 20], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.276 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 44], 95.00th=[ 334], 00:31:45.276 | 99.00th=[ 418], 99.50th=[ 430], 99.90th=[ 464], 99.95th=[ 464], 00:31:45.276 | 99.99th=[ 464] 00:31:45.276 bw ( KiB/s): min= 128, max= 2152, per=4.11%, avg=1161.20, stdev=927.93, samples=20 00:31:45.276 iops : min= 32, max= 538, avg=290.30, stdev=231.98, samples=20 00:31:45.276 lat (msec) : 20=1.68%, 50=89.48%, 100=1.30%, 250=0.96%, 500=6.58% 00:31:45.276 cpu : usr=98.41%, sys=1.16%, ctx=22, majf=0, minf=32 00:31:45.276 IO depths : 1=0.4%, 2=1.2%, 4=12.4%, 8=71.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=92.1%, 8=3.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=2919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename1: (groupid=0, jobs=1): err= 0: pid=682687: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=301, BW=1205KiB/s (1234kB/s)(11.9MiB/10081msec) 00:31:45.276 slat (usec): min=7, max=160, avg=48.67, stdev=39.42 00:31:45.276 clat (msec): min=13, max=305, avg=52.63, stdev=65.67 00:31:45.276 lat (msec): min=13, max=305, avg=52.68, stdev=65.66 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 18], 5.00th=[ 
22], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.276 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 178], 95.00th=[ 251], 00:31:45.276 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 292], 99.95th=[ 305], 00:31:45.276 | 99.99th=[ 305] 00:31:45.276 bw ( KiB/s): min= 256, max= 2224, per=4.27%, avg=1208.00, stdev=895.43, samples=20 00:31:45.276 iops : min= 64, max= 556, avg=302.00, stdev=223.86, samples=20 00:31:45.276 lat (msec) : 20=3.66%, 50=85.94%, 100=0.40%, 250=4.87%, 500=5.14% 00:31:45.276 cpu : usr=98.50%, sys=1.07%, ctx=14, majf=0, minf=38 00:31:45.276 IO depths : 1=1.1%, 2=4.0%, 4=17.6%, 8=65.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=93.2%, 8=1.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=3036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename1: (groupid=0, jobs=1): err= 0: pid=682688: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=279, BW=1117KiB/s (1144kB/s)(11.0MiB/10080msec) 00:31:45.276 slat (nsec): min=3822, max=60028, avg=21075.17, stdev=10609.14 00:31:45.276 clat (msec): min=13, max=549, avg=57.15, stdev=87.89 00:31:45.276 lat (msec): min=13, max=549, avg=57.17, stdev=87.88 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 19], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 36], 00:31:45.276 | 70.00th=[ 39], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 326], 00:31:45.276 | 99.00th=[ 422], 99.50th=[ 443], 99.90th=[ 523], 99.95th=[ 550], 00:31:45.276 | 99.99th=[ 550] 00:31:45.276 bw ( KiB/s): min= 128, max= 2256, per=3.96%, avg=1119.60, stdev=900.89, samples=20 00:31:45.276 iops : min= 32, max= 564, avg=279.90, stdev=225.22, samples=20 00:31:45.276 lat (msec) : 20=1.81%, 50=89.52%, 100=1.28%, 250=0.36%, 500=6.89% 00:31:45.276 lat (msec) : 750=0.14% 00:31:45.276 cpu : usr=98.58%, sys=1.03%, ctx=12, majf=0, minf=17 00:31:45.276 IO depths : 1=0.4%, 2=1.4%, 4=13.1%, 8=71.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=92.1%, 8=3.9%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=2815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename1: (groupid=0, jobs=1): err= 0: pid=682689: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=278, BW=1114KiB/s (1141kB/s)(11.0MiB/10071msec) 00:31:45.276 slat (usec): min=5, max=161, avg=28.35, stdev=22.46 00:31:45.276 clat (msec): min=15, max=538, avg=57.07, stdev=88.51 00:31:45.276 lat (msec): min=15, max=538, avg=57.10, stdev=88.51 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 34], 00:31:45.276 | 70.00th=[ 37], 80.00th=[ 40], 90.00th=[ 45], 95.00th=[ 355], 00:31:45.276 | 99.00th=[ 418], 99.50th=[ 481], 99.90th=[ 542], 99.95th=[ 542], 00:31:45.276 | 99.99th=[ 542] 00:31:45.276 bw ( KiB/s): min= 128, max= 2192, per=3.95%, avg=1116.00, stdev=899.24, samples=20 00:31:45.276 iops : min= 32, max= 548, avg=279.00, stdev=224.81, samples=20 00:31:45.276 lat (msec) : 20=1.71%, 50=89.45%, 100=1.43%, 250=0.50%, 500=6.70% 00:31:45.276 lat (msec) : 750=0.21% 00:31:45.276 cpu : 
usr=98.20%, sys=1.35%, ctx=19, majf=0, minf=23 00:31:45.276 IO depths : 1=0.9%, 2=2.0%, 4=12.4%, 8=71.0%, 16=13.7%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=92.0%, 8=4.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=2806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename1: (groupid=0, jobs=1): err= 0: pid=682690: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=299, BW=1198KiB/s (1227kB/s)(11.7MiB/10042msec) 00:31:45.276 slat (usec): min=7, max=173, avg=62.40, stdev=38.13 00:31:45.276 clat (msec): min=13, max=517, avg=53.11, stdev=72.44 00:31:45.276 lat (msec): min=13, max=518, avg=53.17, stdev=72.42 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 18], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.276 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 51], 95.00th=[ 255], 00:31:45.276 | 99.00th=[ 342], 99.50th=[ 518], 99.90th=[ 518], 99.95th=[ 518], 00:31:45.276 | 99.99th=[ 518] 00:31:45.276 bw ( KiB/s): min= 128, max= 2144, per=4.23%, avg=1196.40, stdev=912.36, samples=20 00:31:45.276 iops : min= 32, max= 536, avg=299.10, stdev=228.09, samples=20 00:31:45.276 lat (msec) : 20=3.16%, 50=86.83%, 100=0.96%, 250=3.39%, 500=5.12% 00:31:45.276 lat (msec) : 750=0.53% 00:31:45.276 cpu : usr=98.17%, sys=1.11%, ctx=32, majf=0, minf=30 00:31:45.276 IO depths : 1=0.7%, 2=1.7%, 4=12.9%, 8=71.4%, 16=13.2%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=92.1%, 8=3.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=3007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename1: (groupid=0, jobs=1): err= 0: pid=682691: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=281, BW=1124KiB/s (1151kB/s)(11.1MiB/10068msec) 00:31:45.276 slat (usec): min=7, max=153, avg=39.03, stdev=34.41 00:31:45.276 clat (msec): min=16, max=537, avg=56.52, stdev=87.61 00:31:45.276 lat (msec): min=16, max=537, avg=56.56, stdev=87.60 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 33], 00:31:45.276 | 70.00th=[ 37], 80.00th=[ 40], 90.00th=[ 47], 95.00th=[ 355], 00:31:45.276 | 99.00th=[ 418], 99.50th=[ 422], 99.90th=[ 498], 99.95th=[ 542], 00:31:45.276 | 99.99th=[ 542] 00:31:45.276 bw ( KiB/s): min= 128, max= 2136, per=3.98%, avg=1125.60, stdev=898.74, samples=20 00:31:45.276 iops : min= 32, max= 534, avg=281.40, stdev=224.69, samples=20 00:31:45.276 lat (msec) : 20=1.31%, 50=89.54%, 100=1.80%, 250=0.28%, 500=7.00% 00:31:45.276 lat (msec) : 750=0.07% 00:31:45.276 cpu : usr=98.44%, sys=1.13%, ctx=22, majf=0, minf=29 00:31:45.276 IO depths : 1=0.4%, 2=1.1%, 4=11.0%, 8=73.3%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=91.6%, 8=4.7%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=2830,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename2: (groupid=0, jobs=1): err= 0: pid=682692: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=298, BW=1195KiB/s 
(1224kB/s)(11.7MiB/10041msec) 00:31:45.276 slat (nsec): min=7800, max=59729, avg=20029.18, stdev=9857.75 00:31:45.276 clat (msec): min=14, max=386, avg=53.44, stdev=71.02 00:31:45.276 lat (msec): min=14, max=386, avg=53.46, stdev=71.02 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.276 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 46], 95.00th=[ 255], 00:31:45.276 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 388], 99.95th=[ 388], 00:31:45.276 | 99.99th=[ 388] 00:31:45.276 bw ( KiB/s): min= 128, max= 2152, per=4.22%, avg=1193.60, stdev=910.77, samples=20 00:31:45.276 iops : min= 32, max= 538, avg=298.40, stdev=227.69, samples=20 00:31:45.276 lat (msec) : 20=1.53%, 50=89.00%, 100=0.40%, 250=2.53%, 500=6.53% 00:31:45.276 cpu : usr=98.47%, sys=1.11%, ctx=13, majf=0, minf=29 00:31:45.276 IO depths : 1=0.6%, 2=1.6%, 4=12.7%, 8=71.1%, 16=14.1%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=92.3%, 8=3.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=3000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename2: (groupid=0, jobs=1): err= 0: pid=682693: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=291, BW=1166KiB/s (1194kB/s)(11.5MiB/10076msec) 00:31:45.276 slat (nsec): min=3877, max=52208, avg=20240.06, stdev=10253.61 00:31:45.276 clat (msec): min=15, max=521, avg=54.74, stdev=77.48 00:31:45.276 lat (msec): min=15, max=521, avg=54.76, stdev=77.48 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 18], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32], 00:31:45.276 | 70.00th=[ 37], 80.00th=[ 39], 90.00th=[ 50], 95.00th=[ 262], 00:31:45.276 | 99.00th=[ 384], 99.50th=[ 401], 99.90th=[ 523], 99.95th=[ 523], 00:31:45.276 | 99.99th=[ 523] 00:31:45.276 bw ( KiB/s): min= 128, max= 2160, per=4.13%, avg=1168.00, stdev=904.32, samples=20 00:31:45.276 iops : min= 32, max= 540, avg=292.00, stdev=226.08, samples=20 00:31:45.276 lat (msec) : 20=2.11%, 50=87.94%, 100=1.23%, 250=2.66%, 500=5.86% 00:31:45.276 lat (msec) : 750=0.20% 00:31:45.276 cpu : usr=98.42%, sys=1.08%, ctx=16, majf=0, minf=21 00:31:45.276 IO depths : 1=0.6%, 2=1.6%, 4=12.1%, 8=71.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=91.9%, 8=4.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=2936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename2: (groupid=0, jobs=1): err= 0: pid=682694: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=297, BW=1189KiB/s (1217kB/s)(11.7MiB/10041msec) 00:31:45.276 slat (usec): min=7, max=134, avg=25.12, stdev=15.38 00:31:45.276 clat (msec): min=15, max=417, avg=53.69, stdev=75.54 00:31:45.276 lat (msec): min=15, max=417, avg=53.72, stdev=75.54 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:31:45.276 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 46], 95.00th=[ 266], 00:31:45.276 | 99.00th=[ 380], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:31:45.276 | 99.99th=[ 418] 
00:31:45.276 bw ( KiB/s): min= 128, max= 2176, per=4.20%, avg=1187.20, stdev=915.88, samples=20 00:31:45.276 iops : min= 32, max= 544, avg=296.80, stdev=228.97, samples=20 00:31:45.276 lat (msec) : 20=1.94%, 50=89.14%, 100=0.34%, 250=1.61%, 500=6.97% 00:31:45.276 cpu : usr=98.08%, sys=1.44%, ctx=21, majf=0, minf=29 00:31:45.276 IO depths : 1=0.3%, 2=1.2%, 4=12.2%, 8=71.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=92.2%, 8=3.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=2984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename2: (groupid=0, jobs=1): err= 0: pid=682695: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=297, BW=1192KiB/s (1220kB/s)(11.7MiB/10041msec) 00:31:45.276 slat (usec): min=7, max=151, avg=34.33, stdev=32.00 00:31:45.276 clat (msec): min=13, max=420, avg=53.50, stdev=78.32 00:31:45.276 lat (msec): min=13, max=420, avg=53.53, stdev=78.32 00:31:45.276 clat percentiles (msec): 00:31:45.276 | 1.00th=[ 19], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.276 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.276 | 70.00th=[ 34], 80.00th=[ 39], 90.00th=[ 45], 95.00th=[ 275], 00:31:45.276 | 99.00th=[ 414], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 422], 00:31:45.276 | 99.99th=[ 422] 00:31:45.276 bw ( KiB/s): min= 128, max= 2144, per=4.21%, avg=1190.00, stdev=932.86, samples=20 00:31:45.276 iops : min= 32, max= 536, avg=297.50, stdev=233.22, samples=20 00:31:45.276 lat (msec) : 20=2.81%, 50=88.43%, 100=0.74%, 250=1.40%, 500=6.62% 00:31:45.276 cpu : usr=98.35%, sys=1.22%, ctx=13, majf=0, minf=23 00:31:45.276 IO depths : 1=1.3%, 2=2.8%, 4=14.9%, 8=68.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:45.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 complete : 0=0.0%, 4=92.6%, 8=2.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.276 issued rwts: total=2991,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.276 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.276 filename2: (groupid=0, jobs=1): err= 0: pid=682696: Sat Jul 20 17:23:59 2024 00:31:45.276 read: IOPS=287, BW=1148KiB/s (1176kB/s)(11.3MiB/10069msec) 00:31:45.276 slat (usec): min=7, max=1100, avg=24.76, stdev=38.77 00:31:45.276 clat (msec): min=13, max=494, avg=55.61, stdev=86.42 00:31:45.276 lat (msec): min=13, max=494, avg=55.63, stdev=86.42 00:31:45.276 clat percentiles (msec): 00:31:45.277 | 1.00th=[ 18], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.277 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32], 00:31:45.277 | 70.00th=[ 37], 80.00th=[ 39], 90.00th=[ 46], 95.00th=[ 334], 00:31:45.277 | 99.00th=[ 418], 99.50th=[ 430], 99.90th=[ 468], 99.95th=[ 493], 00:31:45.277 | 99.99th=[ 493] 00:31:45.277 bw ( KiB/s): min= 128, max= 2160, per=4.07%, avg=1149.60, stdev=916.61, samples=20 00:31:45.277 iops : min= 32, max= 540, avg=287.40, stdev=229.15, samples=20 00:31:45.277 lat (msec) : 20=2.18%, 50=89.00%, 100=1.28%, 250=0.83%, 500=6.71% 00:31:45.277 cpu : usr=91.74%, sys=3.40%, ctx=143, majf=0, minf=33 00:31:45.277 IO depths : 1=0.3%, 2=1.1%, 4=10.8%, 8=73.4%, 16=14.4%, 32=0.0%, >=64=0.0% 00:31:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.277 complete : 0=0.0%, 4=91.6%, 8=4.7%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.277 issued rwts: total=2890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:45.277 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.277 filename2: (groupid=0, jobs=1): err= 0: pid=682697: Sat Jul 20 17:23:59 2024 00:31:45.277 read: IOPS=302, BW=1211KiB/s (1240kB/s)(11.9MiB/10061msec) 00:31:45.277 slat (usec): min=4, max=156, avg=23.34, stdev=20.28 00:31:45.277 clat (msec): min=6, max=381, avg=52.70, stdev=67.93 00:31:45.277 lat (msec): min=6, max=381, avg=52.72, stdev=67.93 00:31:45.277 clat percentiles (msec): 00:31:45.277 | 1.00th=[ 16], 5.00th=[ 24], 10.00th=[ 28], 20.00th=[ 29], 00:31:45.277 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:31:45.277 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 49], 95.00th=[ 255], 00:31:45.277 | 99.00th=[ 292], 99.50th=[ 321], 99.90th=[ 384], 99.95th=[ 384], 00:31:45.277 | 99.99th=[ 384] 00:31:45.277 bw ( KiB/s): min= 176, max= 2232, per=4.29%, avg=1212.70, stdev=907.14, samples=20 00:31:45.277 iops : min= 44, max= 558, avg=303.10, stdev=226.74, samples=20 00:31:45.277 lat (msec) : 10=0.53%, 20=2.20%, 50=87.50%, 100=0.26%, 250=2.92% 00:31:45.277 lat (msec) : 500=6.60% 00:31:45.277 cpu : usr=97.55%, sys=1.40%, ctx=78, majf=0, minf=21 00:31:45.277 IO depths : 1=0.6%, 2=1.4%, 4=11.2%, 8=73.4%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.277 complete : 0=0.0%, 4=91.5%, 8=4.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.277 issued rwts: total=3047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.277 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.277 filename2: (groupid=0, jobs=1): err= 0: pid=682698: Sat Jul 20 17:23:59 2024 00:31:45.277 read: IOPS=302, BW=1208KiB/s (1237kB/s)(11.9MiB/10075msec) 00:31:45.277 slat (usec): min=5, max=252, avg=48.82, stdev=34.82 00:31:45.277 clat (msec): min=12, max=291, avg=52.68, stdev=64.93 00:31:45.277 lat (msec): min=12, max=291, avg=52.73, stdev=64.93 00:31:45.277 clat percentiles (msec): 00:31:45.277 | 1.00th=[ 18], 5.00th=[ 21], 10.00th=[ 27], 20.00th=[ 29], 00:31:45.277 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 30], 60.00th=[ 31], 00:31:45.277 | 70.00th=[ 36], 80.00th=[ 40], 90.00th=[ 78], 95.00th=[ 247], 00:31:45.277 | 99.00th=[ 275], 99.50th=[ 279], 99.90th=[ 292], 99.95th=[ 292], 00:31:45.277 | 99.99th=[ 292] 00:31:45.277 bw ( KiB/s): min= 208, max= 2176, per=4.28%, avg=1210.80, stdev=894.89, samples=20 00:31:45.277 iops : min= 52, max= 544, avg=302.70, stdev=223.72, samples=20 00:31:45.277 lat (msec) : 20=4.53%, 50=84.13%, 100=1.35%, 250=5.62%, 500=4.37% 00:31:45.277 cpu : usr=96.57%, sys=1.80%, ctx=48, majf=0, minf=18 00:31:45.277 IO depths : 1=0.2%, 2=0.3%, 4=6.2%, 8=79.0%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.277 complete : 0=0.0%, 4=89.5%, 8=6.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.277 issued rwts: total=3043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.277 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.277 filename2: (groupid=0, jobs=1): err= 0: pid=682699: Sat Jul 20 17:23:59 2024 00:31:45.277 read: IOPS=308, BW=1233KiB/s (1263kB/s)(12.2MiB/10104msec) 00:31:45.277 slat (usec): min=7, max=161, avg=52.41, stdev=39.20 00:31:45.277 clat (msec): min=7, max=293, avg=51.45, stdev=65.08 00:31:45.277 lat (msec): min=7, max=293, avg=51.51, stdev=65.07 00:31:45.277 clat percentiles (msec): 00:31:45.277 | 1.00th=[ 16], 5.00th=[ 22], 10.00th=[ 26], 20.00th=[ 28], 00:31:45.277 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 31], 
00:31:45.277 | 70.00th=[ 33], 80.00th=[ 39], 90.00th=[ 51], 95.00th=[ 251], 00:31:45.277 | 99.00th=[ 275], 99.50th=[ 288], 99.90th=[ 292], 99.95th=[ 292], 00:31:45.277 | 99.99th=[ 292] 00:31:45.277 bw ( KiB/s): min= 256, max= 2192, per=4.38%, avg=1239.60, stdev=925.07, samples=20 00:31:45.277 iops : min= 64, max= 548, avg=309.90, stdev=231.27, samples=20 00:31:45.277 lat (msec) : 10=0.51%, 20=3.11%, 50=86.42%, 100=0.19%, 250=4.82% 00:31:45.277 lat (msec) : 500=4.94% 00:31:45.277 cpu : usr=98.20%, sys=1.23%, ctx=52, majf=0, minf=26 00:31:45.277 IO depths : 1=0.3%, 2=1.2%, 4=10.8%, 8=73.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:45.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.277 complete : 0=0.0%, 4=91.4%, 8=4.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.277 issued rwts: total=3115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.277 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:45.277 00:31:45.277 Run status group 0 (all jobs): 00:31:45.277 READ: bw=27.6MiB/s (28.9MB/s), 1114KiB/s-1257KiB/s (1141kB/s-1287kB/s), io=279MiB (293MB), run=10028-10108msec 00:31:45.277 17:23:59 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:45.277 17:23:59 -- target/dif.sh@43 -- # local sub 00:31:45.277 17:23:59 -- target/dif.sh@45 -- # for sub in "$@" 00:31:45.277 17:23:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:45.277 17:23:59 -- target/dif.sh@36 -- # local sub_id=0 00:31:45.277 17:23:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@45 -- # for sub in "$@" 00:31:45.277 17:23:59 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:45.277 17:23:59 -- target/dif.sh@36 -- # local sub_id=1 00:31:45.277 17:23:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@45 -- # for sub in "$@" 00:31:45.277 17:23:59 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:45.277 17:23:59 -- target/dif.sh@36 -- # local sub_id=2 00:31:45.277 17:23:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 
17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:45.277 17:23:59 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:45.277 17:23:59 -- target/dif.sh@115 -- # numjobs=2 00:31:45.277 17:23:59 -- target/dif.sh@115 -- # iodepth=8 00:31:45.277 17:23:59 -- target/dif.sh@115 -- # runtime=5 00:31:45.277 17:23:59 -- target/dif.sh@115 -- # files=1 00:31:45.277 17:23:59 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:45.277 17:23:59 -- target/dif.sh@28 -- # local sub 00:31:45.277 17:23:59 -- target/dif.sh@30 -- # for sub in "$@" 00:31:45.277 17:23:59 -- target/dif.sh@31 -- # create_subsystem 0 00:31:45.277 17:23:59 -- target/dif.sh@18 -- # local sub_id=0 00:31:45.277 17:23:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 bdev_null0 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 [2024-07-20 17:23:59.677610] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@30 -- # for sub in "$@" 00:31:45.277 17:23:59 -- target/dif.sh@31 -- # create_subsystem 1 00:31:45.277 17:23:59 -- target/dif.sh@18 -- # local sub_id=1 00:31:45.277 17:23:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 bdev_null1 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:45.277 17:23:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:45.277 17:23:59 -- common/autotest_common.sh@10 -- # set +x 00:31:45.277 17:23:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:45.277 17:23:59 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:45.277 17:23:59 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:45.277 17:23:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:45.277 17:23:59 -- nvmf/common.sh@520 -- # config=() 00:31:45.277 17:23:59 -- nvmf/common.sh@520 -- # local subsystem config 00:31:45.277 17:23:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:45.277 17:23:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.277 17:23:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:45.277 { 00:31:45.277 "params": { 00:31:45.277 "name": "Nvme$subsystem", 00:31:45.277 "trtype": "$TEST_TRANSPORT", 00:31:45.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.277 "adrfam": "ipv4", 00:31:45.277 "trsvcid": "$NVMF_PORT", 00:31:45.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.277 "hdgst": ${hdgst:-false}, 00:31:45.277 "ddgst": ${ddgst:-false} 00:31:45.277 }, 00:31:45.277 "method": "bdev_nvme_attach_controller" 00:31:45.277 } 00:31:45.277 EOF 00:31:45.277 )") 00:31:45.277 17:23:59 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.277 17:23:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:45.277 17:23:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:45.277 17:23:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:45.277 17:23:59 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:45.277 17:23:59 -- target/dif.sh@82 -- # gen_fio_conf 00:31:45.277 17:23:59 -- common/autotest_common.sh@1320 -- # shift 00:31:45.277 17:23:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:45.277 17:23:59 -- target/dif.sh@54 -- # local file 00:31:45.277 17:23:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.277 17:23:59 -- target/dif.sh@56 -- # cat 00:31:45.277 17:23:59 -- nvmf/common.sh@542 -- # cat 00:31:45.277 17:23:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:45.277 17:23:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:45.277 17:23:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:45.277 17:23:59 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:45.277 17:23:59 -- target/dif.sh@72 -- # (( file <= files )) 00:31:45.277 17:23:59 -- target/dif.sh@73 -- # cat 00:31:45.277 17:23:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:45.277 17:23:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:45.277 { 00:31:45.277 "params": { 00:31:45.277 "name": "Nvme$subsystem", 00:31:45.277 "trtype": "$TEST_TRANSPORT", 00:31:45.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.277 "adrfam": "ipv4", 00:31:45.277 "trsvcid": "$NVMF_PORT", 00:31:45.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.277 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:45.277 "hdgst": ${hdgst:-false}, 00:31:45.277 "ddgst": ${ddgst:-false} 00:31:45.277 }, 00:31:45.277 "method": "bdev_nvme_attach_controller" 00:31:45.277 } 00:31:45.277 EOF 00:31:45.277 )") 00:31:45.277 17:23:59 -- nvmf/common.sh@542 -- # cat 00:31:45.277 17:23:59 -- target/dif.sh@72 -- # (( file++ )) 00:31:45.277 17:23:59 -- target/dif.sh@72 -- # (( file <= files )) 00:31:45.277 17:23:59 -- nvmf/common.sh@544 -- # jq . 00:31:45.277 17:23:59 -- nvmf/common.sh@545 -- # IFS=, 00:31:45.277 17:23:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:45.277 "params": { 00:31:45.277 "name": "Nvme0", 00:31:45.277 "trtype": "tcp", 00:31:45.277 "traddr": "10.0.0.2", 00:31:45.277 "adrfam": "ipv4", 00:31:45.277 "trsvcid": "4420", 00:31:45.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:45.277 "hdgst": false, 00:31:45.277 "ddgst": false 00:31:45.277 }, 00:31:45.277 "method": "bdev_nvme_attach_controller" 00:31:45.277 },{ 00:31:45.277 "params": { 00:31:45.277 "name": "Nvme1", 00:31:45.277 "trtype": "tcp", 00:31:45.277 "traddr": "10.0.0.2", 00:31:45.277 "adrfam": "ipv4", 00:31:45.277 "trsvcid": "4420", 00:31:45.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:45.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:45.277 "hdgst": false, 00:31:45.277 "ddgst": false 00:31:45.277 }, 00:31:45.277 "method": "bdev_nvme_attach_controller" 00:31:45.277 }' 00:31:45.277 17:23:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:45.277 17:23:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:45.277 17:23:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.278 17:23:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:45.278 17:23:59 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:45.278 17:23:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:45.278 17:23:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:45.278 17:23:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:45.278 17:23:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:45.278 17:23:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.278 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:45.278 ... 00:31:45.278 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:45.278 ... 00:31:45.278 fio-3.35 00:31:45.278 Starting 4 threads 00:31:45.278 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.278 [2024-07-20 17:24:00.467190] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:45.278 [2024-07-20 17:24:00.467250] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:49.450 00:31:49.450 filename0: (groupid=0, jobs=1): err= 0: pid=684157: Sat Jul 20 17:24:05 2024 00:31:49.450 read: IOPS=2114, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5002msec) 00:31:49.450 slat (nsec): min=3690, max=57664, avg=11376.79, stdev=5087.17 00:31:49.450 clat (usec): min=1826, max=6655, avg=3750.63, stdev=636.98 00:31:49.450 lat (usec): min=1835, max=6684, avg=3762.01, stdev=637.12 00:31:49.450 clat percentiles (usec): 00:31:49.450 | 1.00th=[ 2311], 5.00th=[ 2769], 10.00th=[ 2966], 20.00th=[ 3228], 00:31:49.450 | 30.00th=[ 3425], 40.00th=[ 3589], 50.00th=[ 3720], 60.00th=[ 3884], 00:31:49.450 | 70.00th=[ 4015], 80.00th=[ 4293], 90.00th=[ 4555], 95.00th=[ 4817], 00:31:49.450 | 99.00th=[ 5342], 99.50th=[ 5604], 99.90th=[ 6325], 99.95th=[ 6456], 00:31:49.450 | 99.99th=[ 6587] 00:31:49.450 bw ( KiB/s): min=16400, max=17488, per=26.69%, avg=16913.60, stdev=305.67, samples=10 00:31:49.450 iops : min= 2050, max= 2186, avg=2114.20, stdev=38.21, samples=10 00:31:49.450 lat (msec) : 2=0.15%, 4=68.27%, 10=31.58% 00:31:49.450 cpu : usr=94.64%, sys=4.86%, ctx=7, majf=0, minf=0 00:31:49.450 IO depths : 1=0.3%, 2=2.2%, 4=66.9%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.450 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.450 issued rwts: total=10577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.450 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:49.450 filename0: (groupid=0, jobs=1): err= 0: pid=684158: Sat Jul 20 17:24:05 2024 00:31:49.450 read: IOPS=1805, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5002msec) 00:31:49.450 slat (nsec): min=3554, max=58919, avg=10633.07, stdev=4469.97 00:31:49.450 clat (usec): min=2106, max=9722, avg=4400.19, stdev=785.37 00:31:49.450 lat (usec): min=2124, max=9733, avg=4410.83, stdev=785.40 00:31:49.450 clat percentiles (usec): 00:31:49.450 | 1.00th=[ 2900], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 3752], 00:31:49.450 | 30.00th=[ 3949], 40.00th=[ 4113], 50.00th=[ 4293], 60.00th=[ 4490], 00:31:49.450 | 70.00th=[ 4752], 80.00th=[ 5080], 90.00th=[ 5473], 95.00th=[ 5735], 00:31:49.450 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[ 7570], 99.95th=[ 7635], 00:31:49.450 | 99.99th=[ 9765] 00:31:49.450 bw ( KiB/s): min=14272, max=14672, per=22.79%, avg=14439.60, stdev=109.08, samples=10 00:31:49.450 iops : min= 1784, max= 1834, avg=1804.90, stdev=13.63, samples=10 00:31:49.450 lat (msec) : 4=33.92%, 10=66.08% 00:31:49.450 cpu : usr=94.88%, sys=4.46%, ctx=8, majf=0, minf=0 00:31:49.450 IO depths : 1=0.2%, 2=1.6%, 4=67.4%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.450 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.450 issued rwts: total=9031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.450 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:49.450 filename1: (groupid=0, jobs=1): err= 0: pid=684159: Sat Jul 20 17:24:05 2024 00:31:49.450 read: IOPS=1951, BW=15.2MiB/s (16.0MB/s)(76.3MiB/5003msec) 00:31:49.450 slat (nsec): min=3655, max=45070, avg=10897.86, stdev=4687.41 00:31:49.450 clat (usec): min=2048, max=8992, avg=4067.10, stdev=690.32 00:31:49.450 lat (usec): min=2060, max=9004, avg=4078.00, stdev=690.30 00:31:49.450 clat percentiles (usec): 00:31:49.450 | 1.00th=[ 2671], 5.00th=[ 
3064], 10.00th=[ 3261], 20.00th=[ 3490], 00:31:49.450 | 30.00th=[ 3687], 40.00th=[ 3851], 50.00th=[ 3982], 60.00th=[ 4178], 00:31:49.450 | 70.00th=[ 4359], 80.00th=[ 4555], 90.00th=[ 4948], 95.00th=[ 5276], 00:31:49.450 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 7898], 99.95th=[ 7963], 00:31:49.450 | 99.99th=[ 8979] 00:31:49.450 bw ( KiB/s): min=15232, max=16192, per=24.64%, avg=15611.20, stdev=316.16, samples=10 00:31:49.450 iops : min= 1904, max= 2024, avg=1951.40, stdev=39.52, samples=10 00:31:49.450 lat (msec) : 4=50.22%, 10=49.78% 00:31:49.450 cpu : usr=94.86%, sys=4.46%, ctx=9, majf=0, minf=2 00:31:49.450 IO depths : 1=0.3%, 2=2.5%, 4=67.1%, 8=30.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.450 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.450 issued rwts: total=9762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.450 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:49.450 filename1: (groupid=0, jobs=1): err= 0: pid=684160: Sat Jul 20 17:24:05 2024 00:31:49.450 read: IOPS=2050, BW=16.0MiB/s (16.8MB/s)(80.1MiB/5003msec) 00:31:49.450 slat (nsec): min=4079, max=53007, avg=14016.70, stdev=6264.50 00:31:49.450 clat (usec): min=1663, max=43234, avg=3856.80, stdev=1652.17 00:31:49.450 lat (usec): min=1673, max=43254, avg=3870.82, stdev=1652.05 00:31:49.450 clat percentiles (usec): 00:31:49.450 | 1.00th=[ 2573], 5.00th=[ 2900], 10.00th=[ 3064], 20.00th=[ 3294], 00:31:49.450 | 30.00th=[ 3458], 40.00th=[ 3589], 50.00th=[ 3752], 60.00th=[ 3916], 00:31:49.450 | 70.00th=[ 4047], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 4817], 00:31:49.450 | 99.00th=[ 5407], 99.50th=[ 5669], 99.90th=[42730], 99.95th=[42730], 00:31:49.450 | 99.99th=[43254] 00:31:49.450 bw ( KiB/s): min=15312, max=16976, per=25.88%, avg=16401.60, stdev=526.59, samples=10 00:31:49.450 iops : min= 1914, max= 2122, avg=2050.20, stdev=65.82, samples=10 00:31:49.450 lat (msec) : 2=0.07%, 4=67.56%, 10=32.22%, 50=0.16% 00:31:49.450 cpu : usr=95.42%, sys=4.10%, ctx=6, majf=0, minf=0 00:31:49.450 IO depths : 1=0.4%, 2=3.0%, 4=69.5%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.450 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.450 issued rwts: total=10259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.450 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:49.450 00:31:49.450 Run status group 0 (all jobs): 00:31:49.450 READ: bw=61.9MiB/s (64.9MB/s), 14.1MiB/s-16.5MiB/s (14.8MB/s-17.3MB/s), io=310MiB (325MB), run=5002-5003msec 00:31:49.708 17:24:05 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:49.708 17:24:05 -- target/dif.sh@43 -- # local sub 00:31:49.708 17:24:05 -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.708 17:24:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:49.708 17:24:05 -- target/dif.sh@36 -- # local sub_id=0 00:31:49.708 17:24:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:49.708 17:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.708 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:31:49.708 17:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.708 17:24:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:49.708 17:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.708 17:24:05 -- common/autotest_common.sh@10 -- # set +x 
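The destroy_subsystems teardown traced here mirrors the setup in reverse: delete each NVMe-oF subsystem first, then its backing null bdev. Since rpc_cmd in the harness is a thin wrapper over scripts/rpc.py, the same teardown can be expressed directly (a sketch; the NQNs and bdev names match the trace, and the working directory is assumed to be the SPDK checkout):

    # Teardown sketch mirroring "destroy_subsystems 0 1" as traced above.
    for i in 0 1; do
      ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      ./scripts/rpc.py bdev_null_delete "bdev_null$i"
    done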
00:31:49.708 17:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.708 17:24:05 -- target/dif.sh@45 -- # for sub in "$@" 00:31:49.708 17:24:05 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:49.708 17:24:05 -- target/dif.sh@36 -- # local sub_id=1 00:31:49.708 17:24:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:49.708 17:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.708 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:31:49.708 17:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.708 17:24:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:49.708 17:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.708 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:31:49.708 17:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.708 00:31:49.708 real 0m24.087s 00:31:49.708 user 4m31.991s 00:31:49.708 sys 0m6.718s 00:31:49.708 17:24:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:49.708 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:31:49.708 ************************************ 00:31:49.708 END TEST fio_dif_rand_params 00:31:49.708 ************************************ 00:31:49.708 17:24:05 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:49.708 17:24:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:49.708 17:24:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:49.708 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:31:49.708 ************************************ 00:31:49.708 START TEST fio_dif_digest 00:31:49.708 ************************************ 00:31:49.708 17:24:05 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:49.708 17:24:05 -- target/dif.sh@123 -- # local NULL_DIF 00:31:49.708 17:24:05 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:49.708 17:24:05 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:49.708 17:24:05 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:49.708 17:24:05 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:49.708 17:24:05 -- target/dif.sh@127 -- # numjobs=3 00:31:49.708 17:24:05 -- target/dif.sh@127 -- # iodepth=3 00:31:49.708 17:24:05 -- target/dif.sh@127 -- # runtime=10 00:31:49.708 17:24:05 -- target/dif.sh@128 -- # hdgst=true 00:31:49.708 17:24:05 -- target/dif.sh@128 -- # ddgst=true 00:31:49.708 17:24:05 -- target/dif.sh@130 -- # create_subsystems 0 00:31:49.708 17:24:05 -- target/dif.sh@28 -- # local sub 00:31:49.708 17:24:05 -- target/dif.sh@30 -- # for sub in "$@" 00:31:49.708 17:24:05 -- target/dif.sh@31 -- # create_subsystem 0 00:31:49.708 17:24:05 -- target/dif.sh@18 -- # local sub_id=0 00:31:49.708 17:24:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:49.708 17:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.708 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:31:49.708 bdev_null0 00:31:49.708 17:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.708 17:24:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:49.708 17:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.708 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:31:49.972 17:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.972 17:24:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:31:49.972 17:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.972 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:31:49.972 17:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.972 17:24:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:49.972 17:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:49.972 17:24:05 -- common/autotest_common.sh@10 -- # set +x 00:31:49.972 [2024-07-20 17:24:05.883315] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.972 17:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:49.972 17:24:05 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:49.972 17:24:05 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:49.972 17:24:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:49.972 17:24:05 -- nvmf/common.sh@520 -- # config=() 00:31:49.972 17:24:05 -- nvmf/common.sh@520 -- # local subsystem config 00:31:49.972 17:24:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:49.972 17:24:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:49.972 { 00:31:49.972 "params": { 00:31:49.972 "name": "Nvme$subsystem", 00:31:49.972 "trtype": "$TEST_TRANSPORT", 00:31:49.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:49.972 "adrfam": "ipv4", 00:31:49.972 "trsvcid": "$NVMF_PORT", 00:31:49.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:49.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:49.972 "hdgst": ${hdgst:-false}, 00:31:49.972 "ddgst": ${ddgst:-false} 00:31:49.972 }, 00:31:49.972 "method": "bdev_nvme_attach_controller" 00:31:49.972 } 00:31:49.972 EOF 00:31:49.972 )") 00:31:49.972 17:24:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.972 17:24:05 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:49.972 17:24:05 -- target/dif.sh@82 -- # gen_fio_conf 00:31:49.972 17:24:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:49.972 17:24:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:49.972 17:24:05 -- target/dif.sh@54 -- # local file 00:31:49.972 17:24:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:49.972 17:24:05 -- target/dif.sh@56 -- # cat 00:31:49.972 17:24:05 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.972 17:24:05 -- common/autotest_common.sh@1320 -- # shift 00:31:49.972 17:24:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:49.972 17:24:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.972 17:24:05 -- nvmf/common.sh@542 -- # cat 00:31:49.972 17:24:05 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.972 17:24:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:49.972 17:24:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:49.972 17:24:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:49.972 17:24:05 -- target/dif.sh@72 -- # (( file <= files )) 00:31:49.972 17:24:05 -- nvmf/common.sh@544 -- # jq . 
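The fio_dif_digest setup traced above rebuilds the target with DIF type 3 metadata and enables NVMe/TCP header and data digests on the host attach (the generated JSON printed just below carries "hdgst": true and "ddgst": true). The target-side steps reduce to four RPCs; shown here as direct rpc.py calls (a sketch of the sequence already traced, not an extra step in the run; working directory assumed to be the SPDK checkout):

    # Digest-test target setup, as traced: 64 MiB null bdev with a 512-byte
    # block size, 16-byte metadata and DIF type 3, then subsystem, namespace,
    # and TCP listener.
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420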
00:31:49.972 17:24:05 -- nvmf/common.sh@545 -- # IFS=, 00:31:49.972 17:24:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:49.972 "params": { 00:31:49.972 "name": "Nvme0", 00:31:49.972 "trtype": "tcp", 00:31:49.972 "traddr": "10.0.0.2", 00:31:49.972 "adrfam": "ipv4", 00:31:49.972 "trsvcid": "4420", 00:31:49.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.972 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:49.972 "hdgst": true, 00:31:49.972 "ddgst": true 00:31:49.972 }, 00:31:49.972 "method": "bdev_nvme_attach_controller" 00:31:49.972 }' 00:31:49.972 17:24:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:49.972 17:24:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:49.972 17:24:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:49.972 17:24:05 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:49.972 17:24:05 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:49.972 17:24:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:49.972 17:24:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:49.972 17:24:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:49.972 17:24:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:49.972 17:24:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:50.230 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:50.230 ... 00:31:50.230 fio-3.35 00:31:50.230 Starting 3 threads 00:31:50.230 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.487 [2024-07-20 17:24:06.519585] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:50.487 [2024-07-20 17:24:06.519666] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:02.687 00:32:02.687 filename0: (groupid=0, jobs=1): err= 0: pid=685004: Sat Jul 20 17:24:16 2024 00:32:02.687 read: IOPS=167, BW=20.9MiB/s (21.9MB/s)(210MiB/10047msec) 00:32:02.687 slat (nsec): min=4501, max=34835, avg=12825.47, stdev=2331.81 00:32:02.687 clat (msec): min=7, max=100, avg=17.92, stdev=12.28 00:32:02.687 lat (msec): min=7, max=100, avg=17.93, stdev=12.28 00:32:02.687 clat percentiles (msec): 00:32:02.687 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:32:02.688 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 16], 00:32:02.688 | 70.00th=[ 17], 80.00th=[ 18], 90.00th=[ 19], 95.00th=[ 56], 00:32:02.688 | 99.00th=[ 59], 99.50th=[ 59], 99.90th=[ 99], 99.95th=[ 101], 00:32:02.688 | 99.99th=[ 101] 00:32:02.688 bw ( KiB/s): min=16640, max=28160, per=34.77%, avg=21440.00, stdev=2978.36, samples=20 00:32:02.688 iops : min= 130, max= 220, avg=167.50, stdev=23.27, samples=20 00:32:02.688 lat (msec) : 10=8.52%, 20=82.72%, 50=0.30%, 100=8.40%, 250=0.06% 00:32:02.688 cpu : usr=92.39%, sys=7.03%, ctx=28, majf=0, minf=142 00:32:02.688 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:02.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.688 issued rwts: total=1678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.688 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:02.688 filename0: (groupid=0, jobs=1): err= 0: pid=685005: Sat Jul 20 17:24:16 2024 00:32:02.688 read: IOPS=170, BW=21.3MiB/s (22.3MB/s)(214MiB/10048msec) 00:32:02.688 slat (nsec): min=7578, max=58668, avg=13068.11, stdev=2501.80 00:32:02.688 clat (usec): min=8200, max=97876, avg=17565.33, stdev=11219.85 00:32:02.688 lat (usec): min=8212, max=97889, avg=17578.40, stdev=11219.81 00:32:02.688 clat percentiles (usec): 00:32:02.688 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[11469], 20.00th=[12387], 00:32:02.688 | 30.00th=[13566], 40.00th=[14746], 50.00th=[15533], 60.00th=[16057], 00:32:02.688 | 70.00th=[16712], 80.00th=[17171], 90.00th=[18220], 95.00th=[53216], 00:32:02.688 | 99.00th=[58459], 99.50th=[94897], 99.90th=[95945], 99.95th=[98042], 00:32:02.688 | 99.99th=[98042] 00:32:02.688 bw ( KiB/s): min=16128, max=26880, per=35.47%, avg=21875.20, stdev=2824.53, samples=20 00:32:02.688 iops : min= 126, max= 210, avg=170.90, stdev=22.07, samples=20 00:32:02.688 lat (msec) : 10=1.69%, 20=91.59%, 50=0.53%, 100=6.19% 00:32:02.688 cpu : usr=92.95%, sys=6.50%, ctx=20, majf=0, minf=123 00:32:02.688 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:02.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.688 issued rwts: total=1712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.688 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:02.688 filename0: (groupid=0, jobs=1): err= 0: pid=685006: Sat Jul 20 17:24:16 2024 00:32:02.688 read: IOPS=145, BW=18.1MiB/s (19.0MB/s)(181MiB/10004msec) 00:32:02.688 slat (nsec): min=6888, max=34607, avg=13208.31, stdev=2240.98 00:32:02.688 clat (msec): min=10, max=138, avg=20.67, stdev=12.61 00:32:02.688 lat (msec): min=10, max=138, avg=20.68, stdev=12.61 00:32:02.688 clat percentiles (msec): 00:32:02.688 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 
14], 20.00th=[ 16], 00:32:02.688 | 30.00th=[ 17], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 18], 00:32:02.688 | 70.00th=[ 19], 80.00th=[ 20], 90.00th=[ 22], 95.00th=[ 58], 00:32:02.688 | 99.00th=[ 61], 99.50th=[ 63], 99.90th=[ 99], 99.95th=[ 138], 00:32:02.688 | 99.99th=[ 138] 00:32:02.688 bw ( KiB/s): min=13824, max=23552, per=30.08%, avg=18549.10, stdev=2720.51, samples=20 00:32:02.688 iops : min= 108, max= 184, avg=144.90, stdev=21.25, samples=20 00:32:02.688 lat (msec) : 20=84.98%, 50=6.34%, 100=8.61%, 250=0.07% 00:32:02.688 cpu : usr=92.63%, sys=6.82%, ctx=14, majf=0, minf=144 00:32:02.688 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:02.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.688 issued rwts: total=1451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.688 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:02.688 00:32:02.688 Run status group 0 (all jobs): 00:32:02.688 READ: bw=60.2MiB/s (63.1MB/s), 18.1MiB/s-21.3MiB/s (19.0MB/s-22.3MB/s), io=605MiB (635MB), run=10004-10048msec 00:32:02.688 17:24:16 -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:02.688 17:24:16 -- target/dif.sh@43 -- # local sub 00:32:02.688 17:24:16 -- target/dif.sh@45 -- # for sub in "$@" 00:32:02.688 17:24:16 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:02.688 17:24:16 -- target/dif.sh@36 -- # local sub_id=0 00:32:02.688 17:24:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:02.688 17:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.688 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:32:02.688 17:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.688 17:24:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:02.688 17:24:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:02.688 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:32:02.688 17:24:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:02.688 00:32:02.688 real 0m11.129s 00:32:02.688 user 0m29.049s 00:32:02.688 sys 0m2.306s 00:32:02.688 17:24:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:02.688 17:24:16 -- common/autotest_common.sh@10 -- # set +x 00:32:02.688 ************************************ 00:32:02.688 END TEST fio_dif_digest 00:32:02.688 ************************************ 00:32:02.688 17:24:17 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:02.688 17:24:17 -- target/dif.sh@147 -- # nvmftestfini 00:32:02.688 17:24:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:02.688 17:24:17 -- nvmf/common.sh@116 -- # sync 00:32:02.688 17:24:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:02.688 17:24:17 -- nvmf/common.sh@119 -- # set +e 00:32:02.688 17:24:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:02.688 17:24:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:02.688 rmmod nvme_tcp 00:32:02.688 rmmod nvme_fabrics 00:32:02.688 rmmod nvme_keyring 00:32:02.688 17:24:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:02.688 17:24:17 -- nvmf/common.sh@123 -- # set -e 00:32:02.688 17:24:17 -- nvmf/common.sh@124 -- # return 0 00:32:02.688 17:24:17 -- nvmf/common.sh@477 -- # '[' -n 678672 ']' 00:32:02.688 17:24:17 -- nvmf/common.sh@478 -- # killprocess 678672 00:32:02.688 17:24:17 -- common/autotest_common.sh@926 -- # '[' -z 678672 ']' 00:32:02.688 17:24:17 -- common/autotest_common.sh@930 -- # kill -0 
678672 00:32:02.688 17:24:17 -- common/autotest_common.sh@931 -- # uname 00:32:02.688 17:24:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:02.688 17:24:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 678672 00:32:02.688 17:24:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:02.688 17:24:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:02.688 17:24:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 678672' 00:32:02.688 killing process with pid 678672 00:32:02.688 17:24:17 -- common/autotest_common.sh@945 -- # kill 678672 00:32:02.688 17:24:17 -- common/autotest_common.sh@950 -- # wait 678672 00:32:02.688 17:24:17 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:02.688 17:24:17 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:02.688 Waiting for block devices as requested 00:32:02.688 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:02.688 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:02.688 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:02.688 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:02.688 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:02.688 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:02.688 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:02.946 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:02.946 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:02.946 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:02.946 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:03.204 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:03.204 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:03.204 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:03.204 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:03.462 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:03.462 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:03.462 17:24:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:03.462 17:24:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:03.462 17:24:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:03.462 17:24:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:03.462 17:24:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.462 17:24:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:03.462 17:24:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.995 17:24:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:05.995 00:32:05.995 real 1m6.610s 00:32:05.995 user 6m28.215s 00:32:05.995 sys 0m18.368s 00:32:05.995 17:24:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.995 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:32:05.995 ************************************ 00:32:05.995 END TEST nvmf_dif 00:32:05.995 ************************************ 00:32:05.995 17:24:21 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:05.995 17:24:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:05.995 17:24:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.995 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:32:05.995 ************************************ 00:32:05.995 START TEST nvmf_abort_qd_sizes 00:32:05.995 ************************************ 00:32:05.995 17:24:21 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:05.995 * Looking for test storage... 00:32:05.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:05.995 17:24:21 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.995 17:24:21 -- nvmf/common.sh@7 -- # uname -s 00:32:05.995 17:24:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.995 17:24:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.995 17:24:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.995 17:24:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.995 17:24:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.995 17:24:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.995 17:24:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.995 17:24:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.995 17:24:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.995 17:24:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.995 17:24:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.995 17:24:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.995 17:24:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.995 17:24:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.995 17:24:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.995 17:24:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.995 17:24:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.995 17:24:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.995 17:24:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.995 17:24:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.995 17:24:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.995 17:24:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.995 17:24:21 -- paths/export.sh@5 -- # export PATH 00:32:05.995 17:24:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.995 17:24:21 -- nvmf/common.sh@46 -- # : 0 00:32:05.995 17:24:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:05.995 17:24:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:05.995 17:24:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:05.995 17:24:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.995 17:24:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.995 17:24:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:05.995 17:24:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:05.995 17:24:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:05.995 17:24:21 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:32:05.995 17:24:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:05.995 17:24:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.995 17:24:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:05.995 17:24:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:05.995 17:24:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:05.995 17:24:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.995 17:24:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:05.995 17:24:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.995 17:24:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:05.995 17:24:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:05.995 17:24:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:05.995 17:24:21 -- common/autotest_common.sh@10 -- # set +x 00:32:07.389 17:24:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:07.389 17:24:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:07.390 17:24:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:07.390 17:24:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:07.390 17:24:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:07.390 17:24:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:07.390 17:24:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:07.390 17:24:23 -- nvmf/common.sh@294 -- # net_devs=() 00:32:07.390 17:24:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:07.390 17:24:23 -- nvmf/common.sh@295 -- # e810=() 00:32:07.390 17:24:23 -- nvmf/common.sh@295 -- # local -ga e810 00:32:07.390 17:24:23 -- nvmf/common.sh@296 -- # x722=() 00:32:07.390 17:24:23 -- nvmf/common.sh@296 -- # local -ga x722 00:32:07.390 17:24:23 -- nvmf/common.sh@297 -- # mlx=() 00:32:07.390 17:24:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:07.390 17:24:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.390 17:24:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:07.390 17:24:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:07.390 17:24:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:07.390 17:24:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:07.390 17:24:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:07.390 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:07.390 17:24:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:07.390 17:24:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:07.390 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:07.390 17:24:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:07.390 17:24:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:07.390 17:24:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.390 17:24:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:07.390 17:24:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.390 17:24:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:07.390 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:07.390 17:24:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.390 17:24:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:07.390 17:24:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.390 17:24:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:07.390 17:24:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.390 17:24:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:07.390 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:07.390 17:24:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.390 17:24:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:07.390 17:24:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:07.390 17:24:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:07.390 17:24:23 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:07.390 17:24:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:07.390 17:24:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.390 17:24:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.390 17:24:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.390 17:24:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:07.649 17:24:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.649 17:24:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.649 17:24:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:07.649 17:24:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.649 17:24:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.649 17:24:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:07.649 17:24:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:07.649 17:24:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.649 17:24:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.649 17:24:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.650 17:24:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.650 17:24:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:07.650 17:24:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.650 17:24:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.650 17:24:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.650 17:24:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:07.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:32:07.650 00:32:07.650 --- 10.0.0.2 ping statistics --- 00:32:07.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.650 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:32:07.650 17:24:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:32:07.650 00:32:07.650 --- 10.0.0.1 ping statistics --- 00:32:07.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.650 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:32:07.650 17:24:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.650 17:24:23 -- nvmf/common.sh@410 -- # return 0 00:32:07.650 17:24:23 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:07.650 17:24:23 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:09.026 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:09.026 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:09.026 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:09.026 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:09.026 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:09.026 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:09.026 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:09.026 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:09.026 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:09.026 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:09.026 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:09.026 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:09.026 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:09.026 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:09.026 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:09.026 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:09.962 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:09.962 17:24:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.962 17:24:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:09.962 17:24:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:09.962 17:24:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.962 17:24:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:09.962 17:24:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:09.962 17:24:26 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:32:09.962 17:24:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:09.962 17:24:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:09.962 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:32:09.962 17:24:26 -- nvmf/common.sh@469 -- # nvmfpid=690405 00:32:09.962 17:24:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:09.962 17:24:26 -- nvmf/common.sh@470 -- # waitforlisten 690405 00:32:09.962 17:24:26 -- common/autotest_common.sh@819 -- # '[' -z 690405 ']' 00:32:09.962 17:24:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.962 17:24:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:09.962 17:24:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.962 17:24:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:09.962 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:32:09.962 [2024-07-20 17:24:26.057082] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:32:09.962 [2024-07-20 17:24:26.057174] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.962 EAL: No free 2048 kB hugepages reported on node 1 00:32:10.220 [2024-07-20 17:24:26.126887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:10.220 [2024-07-20 17:24:26.216673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:10.220 [2024-07-20 17:24:26.216851] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.220 [2024-07-20 17:24:26.216872] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.220 [2024-07-20 17:24:26.216887] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.220 [2024-07-20 17:24:26.216949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.220 [2024-07-20 17:24:26.217004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:10.220 [2024-07-20 17:24:26.217122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:10.220 [2024-07-20 17:24:26.217125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.151 17:24:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:11.151 17:24:26 -- common/autotest_common.sh@852 -- # return 0 00:32:11.151 17:24:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:11.151 17:24:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:11.151 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:32:11.151 17:24:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.151 17:24:26 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:11.151 17:24:26 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:11.151 17:24:26 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:11.151 17:24:26 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:11.151 17:24:26 -- scripts/common.sh@312 -- # local nvmes 00:32:11.151 17:24:26 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:32:11.151 17:24:26 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:11.151 17:24:26 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:11.151 17:24:26 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:11.151 17:24:26 -- scripts/common.sh@322 -- # uname -s 00:32:11.151 17:24:26 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:11.151 17:24:26 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:11.151 17:24:26 -- scripts/common.sh@327 -- # (( 1 )) 00:32:11.151 17:24:26 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:32:11.151 17:24:26 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:11.151 17:24:26 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:32:11.151 17:24:26 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:11.151 17:24:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:11.151 17:24:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:11.151 17:24:26 -- common/autotest_common.sh@10 -- # set +x 00:32:11.151 ************************************ 00:32:11.151 START TEST 
spdk_target_abort 00:32:11.151 ************************************ 00:32:11.151 17:24:27 -- common/autotest_common.sh@1104 -- # spdk_target 00:32:11.151 17:24:27 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:11.151 17:24:27 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:11.151 17:24:27 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:11.151 17:24:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:11.151 17:24:27 -- common/autotest_common.sh@10 -- # set +x 00:32:14.430 spdk_targetn1 00:32:14.431 17:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:14.431 17:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.431 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:32:14.431 [2024-07-20 17:24:29.839972] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:14.431 17:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:14.431 17:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.431 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:32:14.431 17:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:14.431 17:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.431 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:32:14.431 17:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:14.431 17:24:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:14.431 17:24:29 -- common/autotest_common.sh@10 -- # set +x 00:32:14.431 [2024-07-20 17:24:29.872263] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.431 17:24:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:14.431 17:24:29 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:14.431 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.709 Initializing NVMe Controllers 00:32:17.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:17.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:17.709 Initialization complete. Launching workers. 00:32:17.709 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 7322, failed: 0 00:32:17.709 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1230, failed to submit 6092 00:32:17.709 success 933, unsuccess 297, failed 0 00:32:17.709 17:24:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:17.709 17:24:33 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:17.709 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.982 Initializing NVMe Controllers 00:32:20.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:20.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:20.982 Initialization complete. Launching workers. 00:32:20.982 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8732, failed: 0 00:32:20.982 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1235, failed to submit 7497 00:32:20.982 success 358, unsuccess 877, failed 0 00:32:20.982 17:24:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:20.982 17:24:36 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:20.982 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.505 Initializing NVMe Controllers 00:32:23.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:23.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:23.505 Initialization complete. Launching workers. 
00:32:23.505 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31338, failed: 0 00:32:23.505 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2634, failed to submit 28704 00:32:23.505 success 548, unsuccess 2086, failed 0 00:32:23.505 17:24:39 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:23.505 17:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.505 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:32:23.505 17:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.505 17:24:39 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:23.505 17:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.505 17:24:39 -- common/autotest_common.sh@10 -- # set +x 00:32:24.875 17:24:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:24.875 17:24:40 -- target/abort_qd_sizes.sh@62 -- # killprocess 690405 00:32:24.875 17:24:40 -- common/autotest_common.sh@926 -- # '[' -z 690405 ']' 00:32:24.875 17:24:40 -- common/autotest_common.sh@930 -- # kill -0 690405 00:32:24.875 17:24:40 -- common/autotest_common.sh@931 -- # uname 00:32:24.876 17:24:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:24.876 17:24:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 690405 00:32:24.876 17:24:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:24.876 17:24:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:24.876 17:24:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 690405' 00:32:24.876 killing process with pid 690405 00:32:24.876 17:24:40 -- common/autotest_common.sh@945 -- # kill 690405 00:32:24.876 17:24:40 -- common/autotest_common.sh@950 -- # wait 690405 00:32:25.153 00:32:25.153 real 0m14.191s 00:32:25.153 user 0m56.157s 00:32:25.153 sys 0m2.607s 00:32:25.153 17:24:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:25.153 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:32:25.153 ************************************ 00:32:25.153 END TEST spdk_target_abort 00:32:25.153 ************************************ 00:32:25.153 17:24:41 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:25.153 17:24:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:25.153 17:24:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:25.153 17:24:41 -- common/autotest_common.sh@10 -- # set +x 00:32:25.153 ************************************ 00:32:25.153 START TEST kernel_target_abort 00:32:25.153 ************************************ 00:32:25.153 17:24:41 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:25.153 17:24:41 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:25.153 17:24:41 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:25.153 17:24:41 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:25.153 17:24:41 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:25.153 17:24:41 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:25.153 17:24:41 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:25.153 17:24:41 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:25.153 17:24:41 -- nvmf/common.sh@627 -- # local block nvme 00:32:25.153 17:24:41 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:25.153 17:24:41 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:25.153 17:24:41 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:25.153 17:24:41 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:26.527 Waiting for block devices as requested 00:32:26.527 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:26.527 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:26.527 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:26.527 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:26.527 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:26.786 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:26.786 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:26.786 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:26.786 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:26.786 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:27.043 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:27.043 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:27.043 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:27.043 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:27.300 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:27.300 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:27.300 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:27.558 17:24:43 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:27.558 17:24:43 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:27.558 17:24:43 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:27.558 17:24:43 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:27.558 17:24:43 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:27.558 No valid GPT data, bailing 00:32:27.558 17:24:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:27.558 17:24:43 -- scripts/common.sh@393 -- # pt= 00:32:27.558 17:24:43 -- scripts/common.sh@394 -- # return 1 00:32:27.558 17:24:43 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:27.558 17:24:43 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:27.558 17:24:43 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:27.558 17:24:43 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:27.558 17:24:43 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:27.558 17:24:43 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:27.558 17:24:43 -- nvmf/common.sh@654 -- # echo 1 00:32:27.558 17:24:43 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:27.558 17:24:43 -- nvmf/common.sh@656 -- # echo 1 00:32:27.558 17:24:43 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:27.558 17:24:43 -- nvmf/common.sh@663 -- # echo tcp 00:32:27.558 17:24:43 -- nvmf/common.sh@664 -- # echo 4420 00:32:27.558 17:24:43 -- nvmf/common.sh@665 -- # echo ipv4 00:32:27.558 17:24:43 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:27.558 17:24:43 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:27.558 00:32:27.558 Discovery Log Number of Records 2, Generation counter 2 00:32:27.558 =====Discovery Log Entry 0====== 00:32:27.558 trtype: tcp 00:32:27.558 adrfam: ipv4 00:32:27.558 
subtype: current discovery subsystem 00:32:27.558 treq: not specified, sq flow control disable supported 00:32:27.558 portid: 1 00:32:27.558 trsvcid: 4420 00:32:27.558 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:27.558 traddr: 10.0.0.1 00:32:27.558 eflags: none 00:32:27.558 sectype: none 00:32:27.558 =====Discovery Log Entry 1====== 00:32:27.558 trtype: tcp 00:32:27.558 adrfam: ipv4 00:32:27.558 subtype: nvme subsystem 00:32:27.558 treq: not specified, sq flow control disable supported 00:32:27.558 portid: 1 00:32:27.558 trsvcid: 4420 00:32:27.558 subnqn: kernel_target 00:32:27.558 traddr: 10.0.0.1 00:32:27.558 eflags: none 00:32:27.558 sectype: none 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:27.558 17:24:43 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:27.558 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.835 Initializing NVMe Controllers 00:32:30.835 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:30.835 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:30.835 Initialization complete. Launching workers. 
00:32:30.835 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 21443, failed: 0 00:32:30.835 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 21443, failed to submit 0 00:32:30.835 success 0, unsuccess 21443, failed 0 00:32:30.835 17:24:46 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:30.835 17:24:46 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:30.835 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.110 Initializing NVMe Controllers 00:32:34.110 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:34.110 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:34.110 Initialization complete. Launching workers. 00:32:34.110 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 46699, failed: 0 00:32:34.110 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 11746, failed to submit 34953 00:32:34.110 success 0, unsuccess 11746, failed 0 00:32:34.110 17:24:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:34.110 17:24:49 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:34.110 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.644 Initializing NVMe Controllers 00:32:36.644 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:36.644 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:36.644 Initialization complete. Launching workers. 
00:32:36.644 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 45800, failed: 0 00:32:36.644 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 11410, failed to submit 34390 00:32:36.644 success 0, unsuccess 11410, failed 0 00:32:36.644 17:24:52 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:36.644 17:24:52 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:36.644 17:24:52 -- nvmf/common.sh@677 -- # echo 0 00:32:36.902 17:24:52 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:36.902 17:24:52 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:36.902 17:24:52 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:36.902 17:24:52 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:36.902 17:24:52 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:36.902 17:24:52 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:36.902 00:32:36.902 real 0m11.621s 00:32:36.902 user 0m3.173s 00:32:36.902 sys 0m2.552s 00:32:36.902 17:24:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:36.902 17:24:52 -- common/autotest_common.sh@10 -- # set +x 00:32:36.902 ************************************ 00:32:36.902 END TEST kernel_target_abort 00:32:36.902 ************************************ 00:32:36.902 17:24:52 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:36.902 17:24:52 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:36.902 17:24:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:36.902 17:24:52 -- nvmf/common.sh@116 -- # sync 00:32:36.902 17:24:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:36.902 17:24:52 -- nvmf/common.sh@119 -- # set +e 00:32:36.902 17:24:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:36.902 17:24:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:36.902 rmmod nvme_tcp 00:32:36.902 rmmod nvme_fabrics 00:32:36.902 rmmod nvme_keyring 00:32:36.902 17:24:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:36.902 17:24:52 -- nvmf/common.sh@123 -- # set -e 00:32:36.902 17:24:52 -- nvmf/common.sh@124 -- # return 0 00:32:36.902 17:24:52 -- nvmf/common.sh@477 -- # '[' -n 690405 ']' 00:32:36.902 17:24:52 -- nvmf/common.sh@478 -- # killprocess 690405 00:32:36.902 17:24:52 -- common/autotest_common.sh@926 -- # '[' -z 690405 ']' 00:32:36.902 17:24:52 -- common/autotest_common.sh@930 -- # kill -0 690405 00:32:36.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (690405) - No such process 00:32:36.902 17:24:52 -- common/autotest_common.sh@953 -- # echo 'Process with pid 690405 is not found' 00:32:36.902 Process with pid 690405 is not found 00:32:36.902 17:24:52 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:36.902 17:24:52 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:37.835 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:38.092 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:38.092 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:38.092 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:38.092 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:38.092 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:38.092 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 
00:32:38.092 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:38.092 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:38.092 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:38.092 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:38.092 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:38.092 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:38.092 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:38.092 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:38.092 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:38.092 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:38.092 17:24:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:38.092 17:24:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:38.092 17:24:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:38.092 17:24:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:38.092 17:24:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.092 17:24:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:38.092 17:24:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.618 17:24:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:40.618 00:32:40.618 real 0m34.700s 00:32:40.618 user 1m1.524s 00:32:40.618 sys 0m8.379s 00:32:40.618 17:24:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.618 17:24:56 -- common/autotest_common.sh@10 -- # set +x 00:32:40.618 ************************************ 00:32:40.618 END TEST nvmf_abort_qd_sizes 00:32:40.618 ************************************ 00:32:40.618 17:24:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:40.618 17:24:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:40.618 17:24:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:40.618 17:24:56 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:40.618 17:24:56 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:40.618 17:24:56 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:40.618 17:24:56 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:40.618 17:24:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:40.618 17:24:56 -- common/autotest_common.sh@10 -- # set +x 00:32:40.618 17:24:56 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:40.618 17:24:56 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:40.618 17:24:56 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:40.618 17:24:56 -- common/autotest_common.sh@10 -- # set +x 00:32:41.989 INFO: APP EXITING 00:32:41.989 INFO: killing all VMs 00:32:41.989 INFO: killing vhost app 00:32:41.989 INFO: EXIT DONE 00:32:43.388 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:43.388 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:32:43.388 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:43.388 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:43.388 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:43.388 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:43.388 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:43.388 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:43.388 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:43.388 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:43.388 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:43.388 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:43.388 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:43.388 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:43.388 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:43.388 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:43.388 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:44.762 Cleaning 00:32:44.762 Removing: /var/run/dpdk/spdk0/config 00:32:44.762 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:44.762 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:44.762 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:44.762 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:44.762 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:44.762 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:44.762 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:44.762 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:44.762 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:44.762 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:44.762 Removing: /var/run/dpdk/spdk1/config 00:32:44.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:44.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:44.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:44.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:44.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:44.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:44.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:44.762 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:44.762 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:44.762 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:44.762 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:44.762 Removing: /var/run/dpdk/spdk2/config 00:32:44.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:44.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:44.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:44.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:44.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:44.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:44.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:44.762 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:44.762 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:44.762 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:44.762 Removing: /var/run/dpdk/spdk3/config 00:32:44.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:44.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:44.762 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:44.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:44.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:44.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:44.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:44.762 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:44.762 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:44.762 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:44.763 Removing: /var/run/dpdk/spdk4/config 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:44.763 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:44.763 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:44.763 Removing: /dev/shm/bdev_svc_trace.1 00:32:44.763 Removing: /dev/shm/nvmf_trace.0 00:32:44.763 Removing: /dev/shm/spdk_tgt_trace.pid416037 00:32:44.763 Removing: /var/run/dpdk/spdk0 00:32:44.763 Removing: /var/run/dpdk/spdk1 00:32:44.763 Removing: /var/run/dpdk/spdk2 00:32:44.763 Removing: /var/run/dpdk/spdk3 00:32:44.763 Removing: /var/run/dpdk/spdk4 00:32:44.763 Removing: /var/run/dpdk/spdk_pid414334 00:32:44.763 Removing: /var/run/dpdk/spdk_pid415094 00:32:44.763 Removing: /var/run/dpdk/spdk_pid416037 00:32:44.763 Removing: /var/run/dpdk/spdk_pid416512 00:32:44.763 Removing: /var/run/dpdk/spdk_pid417731 00:32:44.763 Removing: /var/run/dpdk/spdk_pid418674 00:32:44.763 Removing: /var/run/dpdk/spdk_pid418947 00:32:44.763 Removing: /var/run/dpdk/spdk_pid419181 00:32:44.763 Removing: /var/run/dpdk/spdk_pid419514 00:32:44.763 Removing: /var/run/dpdk/spdk_pid419710 00:32:44.763 Removing: /var/run/dpdk/spdk_pid419873 00:32:44.763 Removing: /var/run/dpdk/spdk_pid420029 00:32:44.763 Removing: /var/run/dpdk/spdk_pid420213 00:32:44.763 Removing: /var/run/dpdk/spdk_pid420670 00:32:44.763 Removing: /var/run/dpdk/spdk_pid423198 00:32:44.763 Removing: /var/run/dpdk/spdk_pid423375 00:32:44.763 Removing: /var/run/dpdk/spdk_pid423674 00:32:44.763 Removing: /var/run/dpdk/spdk_pid423816 00:32:44.763 Removing: /var/run/dpdk/spdk_pid424130 00:32:44.763 Removing: /var/run/dpdk/spdk_pid424269 00:32:44.763 Removing: /var/run/dpdk/spdk_pid424605 00:32:44.763 Removing: /var/run/dpdk/spdk_pid424724 00:32:44.763 Removing: /var/run/dpdk/spdk_pid425016 00:32:44.763 Removing: /var/run/dpdk/spdk_pid425158 00:32:44.763 Removing: /var/run/dpdk/spdk_pid425324 00:32:44.763 Removing: /var/run/dpdk/spdk_pid425396 00:32:44.763 Removing: /var/run/dpdk/spdk_pid425838 00:32:44.763 Removing: /var/run/dpdk/spdk_pid425993 00:32:44.763 Removing: /var/run/dpdk/spdk_pid426187 00:32:44.763 Removing: /var/run/dpdk/spdk_pid426368 00:32:44.763 Removing: /var/run/dpdk/spdk_pid426511 00:32:44.763 Removing: /var/run/dpdk/spdk_pid426570 00:32:44.763 Removing: /var/run/dpdk/spdk_pid426715 00:32:44.763 Removing: /var/run/dpdk/spdk_pid426991 00:32:44.763 Removing: /var/run/dpdk/spdk_pid427138 00:32:44.763 Removing: /var/run/dpdk/spdk_pid427291 00:32:44.763 Removing: /var/run/dpdk/spdk_pid427436 00:32:44.763 Removing: /var/run/dpdk/spdk_pid427717 
00:32:44.763 Removing: /var/run/dpdk/spdk_pid427860
00:32:44.763 Removing: /var/run/dpdk/spdk_pid428020
00:32:44.763 Removing: /var/run/dpdk/spdk_pid428165
00:32:44.763 Removing: /var/run/dpdk/spdk_pid428437
00:32:44.763 Removing: /var/run/dpdk/spdk_pid428588
00:32:44.763 Removing: /var/run/dpdk/spdk_pid428741
00:32:44.763 Removing: /var/run/dpdk/spdk_pid428891
00:32:44.763 Removing: /var/run/dpdk/spdk_pid429164
00:32:44.763 Removing: /var/run/dpdk/spdk_pid429305
00:32:44.763 Removing: /var/run/dpdk/spdk_pid429469
00:32:44.763 Removing: /var/run/dpdk/spdk_pid429611
00:32:44.763 Removing: /var/run/dpdk/spdk_pid429893
00:32:44.763 Removing: /var/run/dpdk/spdk_pid430031
00:32:44.763 Removing: /var/run/dpdk/spdk_pid430191
00:32:44.763 Removing: /var/run/dpdk/spdk_pid430339
00:32:44.763 Removing: /var/run/dpdk/spdk_pid430572
00:32:44.763 Removing: /var/run/dpdk/spdk_pid430758
00:32:44.763 Removing: /var/run/dpdk/spdk_pid430917
00:32:44.763 Removing: /var/run/dpdk/spdk_pid431056
00:32:44.763 Removing: /var/run/dpdk/spdk_pid431246
00:32:44.763 Removing: /var/run/dpdk/spdk_pid431479
00:32:44.763 Removing: /var/run/dpdk/spdk_pid431643
00:32:44.763 Removing: /var/run/dpdk/spdk_pid431784
00:32:44.763 Removing: /var/run/dpdk/spdk_pid431948
00:32:44.763 Removing: /var/run/dpdk/spdk_pid432207
00:32:44.763 Removing: /var/run/dpdk/spdk_pid432360
00:32:44.763 Removing: /var/run/dpdk/spdk_pid432569
00:32:44.763 Removing: /var/run/dpdk/spdk_pid432860
00:32:44.763 Removing: /var/run/dpdk/spdk_pid433050
00:32:44.763 Removing: /var/run/dpdk/spdk_pid433212
00:32:44.763 Removing: /var/run/dpdk/spdk_pid433352
00:32:44.763 Removing: /var/run/dpdk/spdk_pid433956
00:32:44.763 Removing: /var/run/dpdk/spdk_pid434287
00:32:44.763 Removing: /var/run/dpdk/spdk_pid434447
00:32:44.763 Removing: /var/run/dpdk/spdk_pid434635
00:32:44.763 Removing: /var/run/dpdk/spdk_pid434838
00:32:44.763 Removing: /var/run/dpdk/spdk_pid437030
00:32:44.763 Removing: /var/run/dpdk/spdk_pid491639
00:32:44.763 Removing: /var/run/dpdk/spdk_pid494410
00:32:44.763 Removing: /var/run/dpdk/spdk_pid501974
00:32:44.763 Removing: /var/run/dpdk/spdk_pid505274
00:32:44.763 Removing: /var/run/dpdk/spdk_pid508023
00:32:44.763 Removing: /var/run/dpdk/spdk_pid508564
00:32:44.763 Removing: /var/run/dpdk/spdk_pid512325
00:32:44.763 Removing: /var/run/dpdk/spdk_pid512327
00:32:44.763 Removing: /var/run/dpdk/spdk_pid512998
00:32:44.763 Removing: /var/run/dpdk/spdk_pid513670
00:32:44.763 Removing: /var/run/dpdk/spdk_pid514231
00:32:44.763 Removing: /var/run/dpdk/spdk_pid514767
00:32:44.763 Removing: /var/run/dpdk/spdk_pid514778
00:32:44.763 Removing: /var/run/dpdk/spdk_pid514997
00:32:44.763 Removing: /var/run/dpdk/spdk_pid515057
00:32:44.763 Removing: /var/run/dpdk/spdk_pid515083
00:32:44.763 Removing: /var/run/dpdk/spdk_pid515732
00:32:44.763 Removing: /var/run/dpdk/spdk_pid516415
00:32:44.763 Removing: /var/run/dpdk/spdk_pid517092
00:32:44.763 Removing: /var/run/dpdk/spdk_pid517495
00:32:44.763 Removing: /var/run/dpdk/spdk_pid517509
00:32:44.763 Removing: /var/run/dpdk/spdk_pid517650
00:32:44.763 Removing: /var/run/dpdk/spdk_pid518695
00:32:44.763 Removing: /var/run/dpdk/spdk_pid519561
00:32:44.763 Removing: /var/run/dpdk/spdk_pid525654
00:32:44.763 Removing: /var/run/dpdk/spdk_pid525939
00:32:45.022 Removing: /var/run/dpdk/spdk_pid528614
00:32:45.022 Removing: /var/run/dpdk/spdk_pid532374
00:32:45.022 Removing: /var/run/dpdk/spdk_pid534607
00:32:45.022 Removing: /var/run/dpdk/spdk_pid541122
00:32:45.022 Removing: /var/run/dpdk/spdk_pid546564
00:32:45.022 Removing: /var/run/dpdk/spdk_pid547870
00:32:45.022 Removing: /var/run/dpdk/spdk_pid548555
00:32:45.022 Removing: /var/run/dpdk/spdk_pid558909
00:32:45.022 Removing: /var/run/dpdk/spdk_pid561177
00:32:45.022 Removing: /var/run/dpdk/spdk_pid564623
00:32:45.022 Removing: /var/run/dpdk/spdk_pid565817
00:32:45.022 Removing: /var/run/dpdk/spdk_pid567305
00:32:45.022 Removing: /var/run/dpdk/spdk_pid567331
00:32:45.022 Removing: /var/run/dpdk/spdk_pid567596
00:32:45.022 Removing: /var/run/dpdk/spdk_pid567751
00:32:45.022 Removing: /var/run/dpdk/spdk_pid568338
00:32:45.022 Removing: /var/run/dpdk/spdk_pid569714
00:32:45.022 Removing: /var/run/dpdk/spdk_pid570650
00:32:45.022 Removing: /var/run/dpdk/spdk_pid571171
00:32:45.022 Removing: /var/run/dpdk/spdk_pid574664
00:32:45.022 Removing: /var/run/dpdk/spdk_pid578113
00:32:45.022 Removing: /var/run/dpdk/spdk_pid581758
00:32:45.022 Removing: /var/run/dpdk/spdk_pid605269
00:32:45.022 Removing: /var/run/dpdk/spdk_pid607977
00:32:45.022 Removing: /var/run/dpdk/spdk_pid611834
00:32:45.022 Removing: /var/run/dpdk/spdk_pid612930
00:32:45.023 Removing: /var/run/dpdk/spdk_pid614058
00:32:45.023 Removing: /var/run/dpdk/spdk_pid616634
00:32:45.023 Removing: /var/run/dpdk/spdk_pid619164
00:32:45.023 Removing: /var/run/dpdk/spdk_pid623535
00:32:45.023 Removing: /var/run/dpdk/spdk_pid623556
00:32:45.023 Removing: /var/run/dpdk/spdk_pid627006
00:32:45.023 Removing: /var/run/dpdk/spdk_pid627232
00:32:45.023 Removing: /var/run/dpdk/spdk_pid627376
00:32:45.023 Removing: /var/run/dpdk/spdk_pid627647
00:32:45.023 Removing: /var/run/dpdk/spdk_pid627654
00:32:45.023 Removing: /var/run/dpdk/spdk_pid628758
00:32:45.023 Removing: /var/run/dpdk/spdk_pid629981
00:32:45.023 Removing: /var/run/dpdk/spdk_pid631251
00:32:45.023 Removing: /var/run/dpdk/spdk_pid632533
00:32:45.023 Removing: /var/run/dpdk/spdk_pid633759
00:32:45.023 Removing: /var/run/dpdk/spdk_pid634972
00:32:45.023 Removing: /var/run/dpdk/spdk_pid638856
00:32:45.023 Removing: /var/run/dpdk/spdk_pid639318
00:32:45.023 Removing: /var/run/dpdk/spdk_pid640636
00:32:45.023 Removing: /var/run/dpdk/spdk_pid641392
00:32:45.023 Removing: /var/run/dpdk/spdk_pid645176
00:32:45.023 Removing: /var/run/dpdk/spdk_pid647234
00:32:45.023 Removing: /var/run/dpdk/spdk_pid650846
00:32:45.023 Removing: /var/run/dpdk/spdk_pid655067
00:32:45.023 Removing: /var/run/dpdk/spdk_pid658745
00:32:45.023 Removing: /var/run/dpdk/spdk_pid659168
00:32:45.023 Removing: /var/run/dpdk/spdk_pid659586
00:32:45.023 Removing: /var/run/dpdk/spdk_pid660008
00:32:45.023 Removing: /var/run/dpdk/spdk_pid660598
00:32:45.023 Removing: /var/run/dpdk/spdk_pid661144
00:32:45.023 Removing: /var/run/dpdk/spdk_pid661587
00:32:45.023 Removing: /var/run/dpdk/spdk_pid662132
00:32:45.023 Removing: /var/run/dpdk/spdk_pid664806
00:32:45.023 Removing: /var/run/dpdk/spdk_pid664957
00:32:45.023 Removing: /var/run/dpdk/spdk_pid668811
00:32:45.023 Removing: /var/run/dpdk/spdk_pid668987
00:32:45.023 Removing: /var/run/dpdk/spdk_pid670635
00:32:45.023 Removing: /var/run/dpdk/spdk_pid675789
00:32:45.023 Removing: /var/run/dpdk/spdk_pid675794
00:32:45.023 Removing: /var/run/dpdk/spdk_pid678852
00:32:45.023 Removing: /var/run/dpdk/spdk_pid680290
00:32:45.023 Removing: /var/run/dpdk/spdk_pid681609
00:32:45.023 Removing: /var/run/dpdk/spdk_pid682488
00:32:45.023 Removing: /var/run/dpdk/spdk_pid683937
00:32:45.023 Removing: /var/run/dpdk/spdk_pid684821
00:32:45.023 Removing: /var/run/dpdk/spdk_pid690843
00:32:45.023 Removing: /var/run/dpdk/spdk_pid691247
00:32:45.023 Removing: /var/run/dpdk/spdk_pid691650
00:32:45.023 Removing: /var/run/dpdk/spdk_pid693120
00:32:45.023 Removing: /var/run/dpdk/spdk_pid693532
00:32:45.023 Removing: /var/run/dpdk/spdk_pid693941
00:32:45.023 Clean
00:32:45.023 killing process with pid 385419
00:32:53.132 killing process with pid 385416
00:32:53.132 killing process with pid 385418
00:32:53.132 killing process with pid 385417
00:32:53.132 17:25:09 -- common/autotest_common.sh@1436 -- # return 0
00:32:53.132 17:25:09 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:32:53.132 17:25:09 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:53.132 17:25:09 -- common/autotest_common.sh@10 -- # set +x
00:32:53.132 17:25:09 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:32:53.132 17:25:09 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:53.132 17:25:09 -- common/autotest_common.sh@10 -- # set +x
00:32:53.132 17:25:09 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:53.132 17:25:09 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:53.132 17:25:09 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:53.132 17:25:09 -- spdk/autotest.sh@394 -- # hash lcov
00:32:53.132 17:25:09 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:53.132 17:25:09 -- spdk/autotest.sh@396 -- # hostname
00:32:53.132 17:25:09 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:53.391 geninfo: WARNING: invalid characters removed from testname!
00:33:19.918 17:25:35 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:24.097 17:25:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:26.621 17:25:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:29.147 17:25:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:31.675 17:25:47 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:34.958 17:25:50 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:37.489 17:25:53 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:37.489 17:25:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:37.489 17:25:53 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:37.489 17:25:53 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:37.489 17:25:53 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:37.489 17:25:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.489 17:25:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.489 17:25:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.489 17:25:53 -- paths/export.sh@5 -- $ export PATH
00:33:37.489 17:25:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.489 17:25:53 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:37.489 17:25:53 -- common/autobuild_common.sh@435 -- $ date +%s
00:33:37.489 17:25:53 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1721489153.XXXXXX
00:33:37.489 17:25:53 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1721489153.fsbhBE
00:33:37.489 17:25:53 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:33:37.489 17:25:53 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']'
00:33:37.489 17:25:53 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:33:37.489 17:25:53 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:33:37.489 17:25:53 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:37.489 17:25:53 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:37.489 17:25:53 -- common/autobuild_common.sh@451 -- $ get_config_params
00:33:37.489 17:25:53 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:33:37.489 17:25:53 -- common/autotest_common.sh@10 -- $ set +x
00:33:37.489 17:25:53 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:33:37.489 17:25:53 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:33:37.489 17:25:53 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.489 17:25:53 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:37.489 17:25:53 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:37.489 17:25:53 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:37.489 17:25:53 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:37.489 17:25:53 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:37.489 17:25:53 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:37.489 17:25:53 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:37.489 17:25:53 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:37.489 + [[ -n 330331 ]]
00:33:37.489 + sudo kill 330331
00:33:37.497 [Pipeline] }
00:33:37.511 [Pipeline] // stage
00:33:37.516 [Pipeline] }
00:33:37.529 [Pipeline] // timeout
00:33:37.533 [Pipeline] }
00:33:37.546 [Pipeline] // catchError
00:33:37.551 [Pipeline] }
00:33:37.566 [Pipeline] // wrap
00:33:37.571 [Pipeline] }
00:33:37.584 [Pipeline] // catchError
00:33:37.591 [Pipeline] stage
00:33:37.592 [Pipeline] { (Epilogue)
00:33:37.601 [Pipeline] catchError
00:33:37.602 [Pipeline] {
00:33:37.610 [Pipeline] echo
00:33:37.611 Cleanup processes
00:33:37.615 [Pipeline] sh
00:33:37.890 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.890 705971 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.904 [Pipeline] sh
00:33:38.182 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:38.183 ++ grep -v 'sudo pgrep'
00:33:38.183 ++ awk '{print $1}'
00:33:38.183 + sudo kill -9
00:33:38.183 + true
00:33:38.194 [Pipeline] sh
00:33:38.491 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:48.472 [Pipeline] sh
00:33:48.749 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:48.750 Artifacts sizes are good
00:33:48.764 [Pipeline] archiveArtifacts
00:33:48.770 Archiving artifacts
00:33:48.958 [Pipeline] sh
00:33:49.237 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:49.254 [Pipeline] cleanWs
00:33:49.264 [WS-CLEANUP] Deleting project workspace...
00:33:49.264 [WS-CLEANUP] Deferred wipeout is used...
00:33:49.270 [WS-CLEANUP] done
00:33:49.271 [Pipeline] }
00:33:49.287 [Pipeline] // catchError
00:33:49.301 [Pipeline] sh
00:33:49.595 + logger -p user.info -t JENKINS-CI
00:33:49.610 [Pipeline] }
00:33:49.625 [Pipeline] // stage
00:33:49.630 [Pipeline] }
00:33:49.646 [Pipeline] // node
00:33:49.651 [Pipeline] End of Pipeline
00:33:49.755 Finished: SUCCESS